PySpark : Combining Machine Learning & Big Data
With the ever-increasing flow of data comes the industry's focus on how to use that data to drive business insights; but what about the sheer size of the data we have to deal with these days?
The cleaner your data, the better it is for training your ML (Machine Learning) models; but sadly, the world neither feeds you clean data nor can such huge volumes of data be processed quickly with common libraries like Pandas.
How about using the potential of big data libraries with Python support to deal with this huge amount of data and derive business insights using ML techniques? But how can we amalgamate the two?
Here comes “**PySpark: Combining Machine Learning & Big Data**”.
People in the ML domain usually prefer Python, so harnessing the potential of Big Data technologies like Spark to supplement ML becomes easy with PySpark (a Python package that exposes Spark's capabilities).
**This talk will revolve around:**
1) Why do we need to fuse Big Data with Machine Learning?
2) How will Spark's architecture help us speed up data preparation for faster ML?
3) How does PySpark's MLlib (Machine Learning library) help you do ML so seamlessly?
Other sessions from: Global AI Student Conference