PySpark: Combining Machine Learning & Big Data
With the ever-increasing flow of data comes an industry-wide focus on using that data to drive business insights. But what about the sheer size of the data we have to deal with these days?
The cleaner your data, the better your Machine Learning (ML) models train. Sadly, the world rarely hands you clean data, and data at this scale cannot be processed quickly with common libraries like Pandas.
What if we could harness big data frameworks with Python support to handle this huge amount of data and derive business insights with ML techniques? But how can we combine the two?
Here comes **PySpark: Combining Machine Learning & Big Data**.
People in the ML domain usually prefer Python, so tapping the potential of big data technologies like Spark to supplement ML becomes a matter of ease with PySpark (a Python package that exposes Spark's capabilities).
**This talk will revolve around:**
1) Why do we need to fuse Big Data with Machine Learning?
2) How does Spark's architecture help us prepare data faster for ML?
3) How does PySpark's MLlib (Machine Learning library) help you do ML so seamlessly?
Other sessions from: Global AI Student Conference