How You Can Automate ML Experiment Tracking With Vertex AI Experiments Autologging

Practical machine learning (ML) is a trial-and-error process. ML practitioners run experiments and compare performance metrics until they find the best model for a given set of parameters. Because of this experimental nature, there are many reasons to track ML experiments and make them reproducible, including debugging and compliance.

But tracking experiments is challenging: you need to organize them so that other team members can quickly understand, reproduce, and compare them. That adds overhead you would rather avoid.

We are happy to announce Vertex AI Experiments autologging, a solution that provides automated experiment tracking for your models and streamlines your ML experimentation.

With Vertex AI Experiments autologging, you can now log parameters, performance metrics, and lineage artifacts by adding a single line of code to your training script, without needing to explicitly call any other logging methods.

How to use Vertex AI Experiments autologging

As a data scientist or ML practitioner, you conduct your experiment in a notebook environment such as Colab or Vertex AI Workbench. To enable Vertex AI Experiments autologging, you call aiplatform.autolog() in your Vertex AI Experiment session. After that call, any parameters, metrics and artifacts associated with model training are automatically logged and then accessible within the Vertex AI Experiment console.

Here’s how to enable autologging in your training session with a scikit-learn model.

from google.cloud import aiplatform
from sklearn.pipeline import Pipeline

# Initialize the Vertex AI SDK with an experiment
aiplatform.init(
    project="your-project-id",
    location="us-central1",
    experiment="your-experiment-name",
)

# Enable autologging
aiplatform.autolog()

# Build training pipeline
ml_pipeline = Pipeline(...)

# Train model; parameters and metrics are logged automatically
ml_pipeline.fit(x_train, y_train)

The accompanying video, "Vertex AI Experiments – Autologging," shows parameters and training/post-training metrics in the Vertex AI Experiments console.

Vertex AI SDK autologging uses MLflow’s autologging in its implementation and supports several frameworks, including XGBoost, Keras, and PyTorch Lightning. See the documentation for all supported frameworks.

Vertex AI Experiments autologging also automatically logs model time series metrics when you train models over multiple epochs, thanks to its integration with Vertex AI TensorBoard.
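For example, a Keras model trained over several epochs has its per-epoch metrics captured automatically. Here is a minimal sketch, assuming your project, region, experiment name, and training data already exist (the names below are placeholders, and the helper function is illustrative, not part of the SDK):

```python
def train_keras_with_autologging(x_train, y_train):
    """Sketch: train a Keras model over multiple epochs so that
    autologging records per-epoch loss as time series metrics.
    Project/location/experiment names are placeholders."""
    from google.cloud import aiplatform
    from tensorflow import keras

    # Initialize the SDK against an existing experiment
    aiplatform.init(
        project="your-project-id",
        location="us-central1",
        experiment="your-experiment-name",
    )
    aiplatform.autolog()

    model = keras.Sequential([
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Each epoch's loss is logged automatically as a time series metric
    model.fit(x_train, y_train, epochs=10)
    return model
```

The epoch-level metrics then appear as time series charts in the Vertex AI Experiments console, backed by Vertex AI TensorBoard.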

Furthermore, you can adapt Vertex AI Experiments autologging to your needs. For example, suppose your team has a specific experiment naming convention. By default, Vertex AI Experiments autologging creates Experiment Runs for you without requiring you to call aiplatform.start_run() or aiplatform.end_run(). If you’d like to specify your own Experiment Run names, you can manually initialize a specific run within the experiment using aiplatform.start_run() and aiplatform.end_run() after autologging has been enabled.
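That pattern can be sketched as follows, assuming an initialized SDK and an existing experiment (the run name and wrapper function are placeholders chosen for illustration):

```python
def train_with_named_run(ml_pipeline, x_train, y_train):
    """Sketch: wrap training in an explicitly named Experiment Run
    instead of letting autologging create one. Project, experiment,
    and run names are placeholders."""
    from google.cloud import aiplatform

    aiplatform.init(
        project="your-project-id",
        location="us-central1",
        experiment="your-experiment-name",
    )
    aiplatform.autolog()

    # Start a run with your team's naming convention
    aiplatform.start_run("my-team-run-001")

    # Everything autologging captures is attributed to this run
    ml_pipeline.fit(x_train, y_train)

    aiplatform.end_run()
```

Any parameters, metrics, and artifacts captured between the start and end calls are attributed to the run name you chose.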

What’s next

You can access Vertex AI Experiments autologging with the latest version of the Vertex AI SDK for Python. To learn more, check out these resources:

While I’m thinking about the next blog post, let me know if there is Vertex AI content you’d like to see on Linkedin or Twitter.

By: Ivan Nardini (Customer Engineer) and May Hu (Product Manager, Google Cloud)
Originally published at: Google Cloud Blog

Source: Cyberpogo


