How to use the Livy server to submit Spark jobs through a REST interface


Since we started our Hadoop journey, and more particularly since we started developing Spark jobs in Scala and Python, having an efficient development environment has always been a challenge.

What we currently do is edit files remotely via the SSH FS plugin in VSCode and submit scripts from a shell terminal directly on one of our edge nodes.

VSCode is a wonderful tool, but it lacks the code completion, suggestions and tips that increase your productivity, at least for PySpark and Scala. Recently I succeeded in configuring the Community Edition of IntelliJ IDEA to submit jobs from my desktop using the data of our Hadoop cluster. Alongside this configuration I have also set up a local Spark environment as well as the sbt compiler for Scala jobs. I will soon share an article on this…

One of my teammates suggested having a look at Livy, so I decided to try it, even if in the end I have been a little disappointed by its capabilities…

Livy is a REST interface through which you interact with a Spark cluster. In our Hortonworks HDP 2.6 installation the Livy server comes pre-installed, so in short I had nothing to do to install or configure it. In a different configuration you might have to install and configure the Livy server yourself.


On our Hadoop cluster the Livy server came with the Spark installation and is already configured as follows:


To find out on which server the Livy process runs, and whether it is running, follow the link from the Spark service home page:


You finally get the name of the server where the Livy process is running, and its status:


You can access it using your preferred browser:


Curl testing

Curl is, as you know, a tool to test resources across a network. Here we will use it to access the Livy REST API. Interactive sessions created through this API can execute Scala, Python or R code.

Even if the official documentation focuses on Python scripts, curl helped me resolve an annoying error first. In the commands below, <livy-server> is a placeholder for the host where your Livy server runs (8999 is its default port). I started with:

# curl -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" http://<livy-server>:8999/sessions
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 400 </title>
<h2>HTTP ERROR: 400</h2>
<p>Problem accessing /sessions. Reason:
<pre>    Missing Required Header for CSRF protection.</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>

I found that the livy.server.csrf_protection.enabled parameter was set to true in my configuration, so I had to add an extra request header, X-Requested-By:

# curl -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" -H "X-Requested-By: Yannick" http://<livy-server>:8999/sessions
{"id":8,"appId":null,"owner":null,"proxyUser":null,"state":"starting","kind":"pyspark","appInfo":{"driverLogUrl":null,"sparkUiUrl":null},"log":["stdout: ","\nstderr: ","\nYARN Diagnostics: "]}

The response shows that session id 8 has been created for me, which can be confirmed graphically:


The session is in idle state:

# curl http://<livy-server>:8999/sessions/8
"log":["stdout: ","\nstderr: ","Warning: Master yarn-cluster is deprecated since 2.0. Please use master \"yarn\" with specified deploy mode instead.","\nYARN Diagnostics: "]}

Let's submit a bit of work to it:

# curl -H "X-Requested-By: Yannick" -X POST -H 'Content-Type: application/json' -d '{"code":"2 + 2"}' http://<livy-server>:8999/sessions/8/statements
{"id":0,"code":"2 + 2","state":"waiting","output":null,"progress":0.0}

We can get the result using:

# curl http://<livy-server>:8999/sessions/8/statements
{"total_statements":1,"statements":[{"id":0,"code":"2 + 2","state":"available","output":{"status":"ok","execution_count":0,"data":{"text/plain":"4"}},"progress":1.0}]}

Or graphically:


To optionally clean up the session:

# curl -H "X-Requested-By: Yannick" -X DELETE http://<livy-server>:8999/sessions/8

Python testing

Python is what the Livy documentation promotes by default to test the service. I started by installing the requests package (as well as upgrading pip):

PS D:\> python -m pip install --upgrade pip --user
Collecting pip
  Downloading (1.4MB)
     |████████████████████████████████| 1.4MB 2.2MB/s
Installing collected packages: pip
  Found existing installation: pip 19.2.3
    Uninstalling pip-19.2.3:
Successfully installed pip-19.3.1
WARNING: You are using pip version 19.2.3, however version 19.3.1 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.

Requests package installation:

PS D:\> python -m pip install requests --user
Collecting requests
  Downloading (57kB)
     |████████████████████████████████| 61kB 563kB/s
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests)
  Downloading (125kB)
     |████████████████████████████████| 133kB 6.4MB/s
Collecting certifi>=2017.4.17 (from requests)
  Downloading (154kB)
     |████████████████████████████████| 163kB 3.3MB/s
Collecting chardet<3.1.0,>=3.0.2 (from requests)
  Downloading (133kB)
     |████████████████████████████████| 143kB 6.4MB/s
Collecting idna<2.9,>=2.5 (from requests)
  Downloading (58kB)
     |████████████████████████████████| 61kB 975kB/s
Installing collected packages: urllib3, certifi, idna, chardet, requests
  WARNING: The script chardetect.exe is installed in 'C:\Users\yjaquier\AppData\Roaming\Python\Python38\Scripts' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed certifi-2019.9.11 chardet-3.0.4 idna-2.8 requests-2.22.0 urllib3-1.25.7

Python testing has been done using Python 3.8 installed on my Windows 10 machine. Obviously you can also see graphically what's going on, but as it is exactly the same as with curl I will not share that output again.

Session creation:

>>> import json, pprint, requests, textwrap
>>> host = 'http://<livy-server>:8999'
>>> data = {'kind': 'spark'}
>>> headers = {'Content-Type': 'application/json', 'X-Requested-By': 'Yannick'}
>>> r = requests.post(host + '/sessions', data=json.dumps(data), headers=headers)
>>> r.json()
{'id': 9, 'appId': None, 'owner': None, 'proxyUser': None, 'state': 'starting', 'kind': 'spark', 'appInfo': {'driverLogUrl': None, 'sparkUiUrl': None},
'log': ['stdout: ', '\nstderr: ', '\nYARN Diagnostics: ']}

Session status:

>>> session_url = host + r.headers['location']
>>> r = requests.get(session_url, headers=headers)
>>> r.json()
{'id': 9, 'appId': 'application_1565718945091_261793', 'owner': None, 'proxyUser': None, 'state': 'idle', 'kind': 'spark',
'appInfo': {'driverLogUrl': '',
'sparkUiUrl': ''},
'log': ['stdout: ', '\nstderr: ', 'Warning: Master yarn-cluster is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.', '\nYARN Diagnostics: ']}
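Depending on cluster load the session can stay in 'starting' for a while, and it only accepts statements once it reaches 'idle'. This is a minimal polling sketch of my own (wait_for_session is not part of the Livy API); it takes any callable returning the session state so it is easy to reuse:

```python
import time

def wait_for_session(get_state, timeout=120, interval=5):
    """Poll get_state() until the session is 'idle' (ready for statements).

    get_state is any zero-argument callable returning the session state
    string; raises on terminal states or on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = get_state()
        if state == 'idle':
            return state
        if state in ('error', 'dead', 'killed', 'shutting_down'):
            raise RuntimeError('session ended in state: ' + state)
        time.sleep(interval)
    raise TimeoutError('session not idle after %d seconds' % timeout)

# With the session_url and headers built above it could be used as:
# wait_for_session(lambda: requests.get(session_url, headers=headers).json()['state'])
```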

Note the sparkUiUrl, which gives direct access to your Spark UI server for an even better description of your workload…

Job submission:

>>> statements_url = session_url + '/statements'
>>> data = {'code': '1 + 1'}
>>> r = requests.post(statements_url, data=json.dumps(data), headers=headers)
>>> r.json()
{'id': 0, 'code': '1 + 1', 'state': 'waiting', 'output': None, 'progress': 0.0}

Job result:

>>> statement_url = host + r.headers['location']
>>> r = requests.get(statement_url, headers=headers)
>>> pprint.pprint(r.json())
{'code': '1 + 1',
 'id': 0,
 'output': {'data': {'text/plain': 'res0: Int = 2'},
            'execution_count': 0,
            'status': 'ok'},
 'progress': 1.0,
 'state': 'available'}
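'1 + 1' completes almost instantly, but a real job will sit in 'waiting' and then 'running' for a while, so a script should poll until the state is 'available'. A sketch under the same assumptions (the helper is my own, not part of Livy):

```python
import time

def wait_for_statement(get_statement, timeout=600, interval=2):
    """Poll get_statement() (returning the statement JSON as a dict)
    until its state is 'available', then return its 'output' field."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        stmt = get_statement()
        if stmt['state'] == 'available':
            return stmt['output']
        time.sleep(interval)
    raise TimeoutError('statement still not available after %d seconds' % timeout)

# With statement_url and headers as above:
# output = wait_for_statement(lambda: requests.get(statement_url, headers=headers).json())
# output['data']['text/plain'] then holds the printable result.
```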

Optional session deletion:

>>> r = requests.delete(session_url, headers=headers)
>>> r.json()
{'msg': 'deleted'}
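To finish, note that nothing limits statements to arithmetic: in a 'spark' (Scala) session Livy predefines the SparkContext as sc, so any snippet you would type in spark-shell can be shipped as a statement. A hypothetical sketch (the Scala one-liner and the helper are my own illustration, not from the Livy documentation):

```python
import json

# Scala snippet to run remotely; sc is predefined by Livy in a 'spark' session.
spark_code = 'sc.parallelize(1 to 100).sum()'

def build_statement(code):
    """Build the JSON payload Livy expects on POST /sessions/{id}/statements."""
    return json.dumps({'code': code})

payload = build_statement(spark_code)
# To run it, POST exactly as for '1 + 1' above:
# r = requests.post(statements_url, data=payload, headers=headers)
```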

