I am trying to install Apache Toree in order to use Spark from Jupyter notebooks. I have installed Spark under /opt/spark/ (Ubuntu). I believe the install locations for both Apache Spark and Toree need to be in the same directory, but I can't pip install toree into that folder.

Here is the directory where Spark is installed:

:~$ cd /opt/spark/
:/opt/spark$ ls
bin  conf  data  examples  jars  kubernetes  LICENSE  licenses
logs  NOTICE  python  R  README.md  RELEASE  sbin  work  yarn

And the errors: Jupyter keeps printing "KernelRestarter: restarting kernel (4/5), keep random ports", and the kernel log shows:

Starting Spark Kernel with SPARK_HOME=/opt/spark/
21/01/27 01:40:16 WARN Utils: Your hostname, deep-VirtualBox resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)
21/01/27 01:40:16 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
WARNING: An illegal reflective access operation has occurred
WARNING: All illegal access operations will be denied in a future release
21/01/27 01:40:17 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" : scala/App$class
  at $.(Main.scala:24)
  at .main(Main.scala)
  at java.base/.invoke0(Native Method)
  at java.base/.invoke(NativeMethodAccessorImpl.java:62)
  at java.base/.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/.invoke(Method.java:566)
  at .JavaMainApplication.start(SparkApplication.scala:52)
  at .$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
  at .SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
  at .SparkSubmit.submit(SparkSubmit.scala:203)
  at .SparkSubmit.doSubmit(SparkSubmit.scala:90)
  at .SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
  at .SparkSubmit$.main(SparkSubmit.scala:1016)
  at .SparkSubmit.main(SparkSubmit.scala)
Caused by: : scala.App$class
  at java.base/(URLClassLoader.java:471)
  at java.base/(ClassLoader.java:589)
  at java.base/(ClassLoader.java:522)
log4j:WARN No appenders could be found for logger (.ShutdownHookManager).
log4j:WARN Please initialize the log4j system properly.
Error opening stream: HTTP 404: Not Found (Kernel does not exist: 5bac4936-6c30-45b3-bb78-c82469d58dd3)

Running spark-shell directly does work, though. Its startup output:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by .Platform (file:/opt/spark/jars/spark-unsafe_2.12-3.0.1.jar) to constructor (long,int)
WARNING: Please consider reporting this to the maintainers of .Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
20/09/09 22:48:09 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark context available as 'sc' (master = local, app id = local-1599706095232).
To adjust logging level use sc.setLogLevel(newLevel).
Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 11.0.8)
Type in expressions to have them evaluated.

(Exit the current Spark shell by holding down CTRL + D. The Spark shell is available not only in Scala but also in Python.)

Can any of you please help me out? I am probably doing something wrong here. Thanks for your inputs.
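As far as I understand, Toree does not actually need to live inside the Spark directory: the pip package goes into the Python environment, and the Jupyter kernel spec is then pointed at the existing Spark install via --spark_home. A sketch of the usual sequence (per the Toree docs; adjust the paths to your machine):

```shell
# Toree is installed with pip into the Python environment, not into
# /opt/spark. The kernel spec is then registered with Jupyter and
# pointed at the existing Spark install via --spark_home.
pip install toree
jupyter toree install --spark_home=/opt/spark --user

# Verify that the kernel spec was registered:
jupyter kernelspec list
```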
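One detail that may matter here: the class-loading failure for scala/App$class in the trace above is the classic signature of a Scala binary-version mismatch. scala.App$class existed in Scala 2.11 but was removed by the new trait encoding in Scala 2.12, so a Toree build compiled for 2.11 dies exactly like this when launched against a Scala 2.12 Spark. Which Scala binary version a Spark build targets can be read off its jar names; a minimal sketch using the jar name from the warning above:

```shell
# Spark jars embed the Scala binary version in their names,
# e.g. spark-unsafe_2.12-3.0.1.jar targets Scala 2.12.
jar="spark-unsafe_2.12-3.0.1.jar"  # in practice: any spark-*.jar under $SPARK_HOME/jars
ver="${jar#*_}"    # strip up to and including the first '_' -> 2.12-3.0.1.jar
ver="${ver%%-*}"   # strip from the first '-' onward         -> 2.12
echo "$ver"        # prints: 2.12
```

If the version printed here does not match the Scala version the installed Toree kernel was built against, the kernel will keep crashing at startup just as in the log above.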