Livy provides an interesting way to use Spark as a RESTful service. In my opinion, though, this is not an ideal way to interact with Spark: there is just a bit too much language-interoperability overhead to make it worthwhile. For starters, sending strings of Scala code over the wire doesn’t inspire a lot of confidence.
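To make the "strings of code over the wire" point concrete, here is a minimal sketch of the JSON bodies Livy expects: you POST to `/sessions` to start an interactive session, then POST your Scala snippet as a plain string to `/sessions/{id}/statements`. The host and port are assumptions for illustration; no actual HTTP call is made here.

```python
import json

# Livy default endpoint -- hypothetical, adjust for your deployment.
LIVY_URL = "http://localhost:8998"

def session_payload(kind="spark"):
    """Body for POST /sessions -- starts an interactive Spark session."""
    return json.dumps({"kind": kind})

def statement_payload(scala_code):
    """Body for POST /sessions/{id}/statements.

    Note the core awkwardness: the Scala program travels as an
    unchecked string -- no compiler, no type checker, no IDE help
    until it reaches the remote interpreter.
    """
    return json.dumps({"code": scala_code})

print(statement_payload("sc.parallelize(1 to 100).sum()"))
```

In practice you would send these bodies with any HTTP client and poll the statement's URL until its state is `available`, but the payloads above are where the interoperability friction lives.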
For the last year or so I’ve been blogging regularly about the Apache Spark platform. During that time, Spark has grown from a tool used mainly by data scientists and engineers into something almost ubiquitous. I’ve enjoyed working with the platform professionally, and even on a number of personal projects.
GraphFrames allow us to do exactly this. It’s an API for doing graph analytics on Spark DataFrames. This lets us recreate SQL queries as graph queries and get a better grasp of graph concepts. Not having to load the data and wire up the relationships by hand makes a real difference in a pedagogical context (at least I’ve found).
The past few weeks I’ve been testing Amazon Athena as an alternative to standing up Hadoop and Spark for ad hoc analytical queries. During that research, I’ve been looking closely at file formats for the style of data stored in S3 for Athena. I have typically been happy with Apache Parquet as my go-to, because of its popularity and guarantees, but some research pointed me to Apache ORC and its advantages in this context.
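Swapping formats in Athena is largely a matter of the `STORED AS` clause in the table DDL, which is part of what makes this kind of comparison cheap to run. A sketch, with a hypothetical bucket path and schema:

```sql
-- Hypothetical table; only STORED AS (and the files at LOCATION)
-- change when comparing Parquet against ORC.
CREATE EXTERNAL TABLE events_orc (
  event_id string,
  ts       timestamp,
  payload  string
)
STORED AS ORC
LOCATION 's3://my-bucket/events/orc/';
```

Because both are columnar formats with built-in compression and statistics, the interesting differences show up in scan bytes and query latency rather than in the table definition itself.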
In a recent project, I wanted to do text searches over a large unstructured dataset (100 GB) in memory, and I was able to do it in Spark once I provisioned a machine with enough memory. It worked quickly and efficiently, but it bothered me that I couldn’t compress the data and had to spin up a master with that much memory.