Accelerated Machine Learning with Google Cloud and NVIDIA
Learn how to use NVIDIA cuDF and cuML within a Google Cloud Colab Enterprise environment to dramatically accelerate end-to-end machine learning workflows with zero code changes.
This hands-on lab demonstrates how to use NVIDIA cuDF and cuML within a Google Cloud Colab Enterprise environment to dramatically accelerate end-to-end machine learning workflows with zero code changes.
- Set up a cloud environment: Configure and connect to a GPU-accelerated runtime using Colab Enterprise runtime templates.
- Accelerate your pipeline instantly: Use `cuDF` and `cuML` to accelerate standard `pandas` data preparation and `scikit-learn` model training code without modification.
- Train models faster: Use GPU-accelerated algorithms, including `XGBoost`, `RandomForest`, and `LinearRegression`, to predict outcomes on large datasets.
- Benchmark and profile performance: Compare end-to-end execution times between CPU and GPU, and use profiling magic commands to identify execution fallbacks.
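The zero-code-change workflow described above can be sketched as follows. In a GPU-backed Colab Enterprise notebook you would first run the `%load_ext cudf.pandas` and `%load_ext cuml.accel` magics; the standard pandas/scikit-learn code below then runs on the GPU unchanged. The synthetic dataset and model choice here are illustrative assumptions, and the script is written to run as-is on CPU so it stays self-contained:

```python
# In a GPU runtime you would first run these notebook magics
# (shown as comments so this script also runs unmodified on CPU):
#   %load_ext cudf.pandas   # accelerate pandas with cuDF
#   %load_ext cuml.accel    # accelerate scikit-learn with cuML

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for the lab's larger dataset (assumption).
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),
})
df["y"] = 3.0 * df["x1"] - 2.0 * df["x2"] + rng.normal(scale=0.1, size=n)

# Standard pandas data preparation -- identical on CPU and GPU.
df = df.dropna()
X = df[["x1", "x2"]]
y = df["y"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standard scikit-learn training -- dispatched to cuML when the
# accelerator extension is loaded, with no code changes.
model = LinearRegression().fit(X_train, y_train)
score = model.score(X_test, y_test)
print(f"R^2 on held-out data: {score:.3f}")
```

The same pattern applies to the other estimators the lab covers: the import paths and fit/predict calls stay the same, and the extensions decide at runtime whether each operation runs on the GPU or falls back to the CPU.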
Accelerate ML workflows with zero code rewrites on Google Cloud
Speed up your pandas and scikit-learn machine learning workflows by up to an order of magnitude on Google Cloud's Colab Enterprise using NVIDIA GPUs. No code rewrites are required: just a single extension-loading command.
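The CPU-versus-GPU comparison can be sketched with simple wall-clock timing around an otherwise ordinary pandas operation. The data shape and groupby workload below are illustrative assumptions; on a GPU runtime, loading `%load_ext cudf.pandas` beforehand would accelerate the identical code:

```python
import time

import numpy as np
import pandas as pd

# Build a sizable frame. With %load_ext cudf.pandas active in a GPU
# runtime, the groupby below runs on the GPU with no code changes.
rng = np.random.default_rng(42)
n = 1_000_000
df = pd.DataFrame({
    "key": rng.integers(0, 1_000, size=n),
    "value": rng.normal(size=n),
})

# Time an end-to-end aggregation; rerun the same cell on CPU and GPU
# runtimes to compare execution times.
start = time.perf_counter()
agg = df.groupby("key")["value"].mean()
elapsed = time.perf_counter() - start
print(f"groupby-mean over {n:,} rows took {elapsed:.4f} s")
```

For finer-grained analysis, the lab's profiling magics can report which operations ran on the GPU and which fell back to the CPU, which is more informative than a single end-to-end timing.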
Accelerated Machine Learning with Google Cloud and NVIDIA Quiz
Pass the quiz to earn a badge.