May 1, 2023
We have the following updates in SDK 1.2.4:
- Support for Python 3.10
- Support for PyTorch 2.0.0 for experiment tracking
- Removed the dependency on KeyChain
- Support for XGBoost and scikit-learn for experiment tracking

Please note that you will need to update your SDK; follow the instructions to update your SDK installation here.
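If you want to confirm that your local environment matches the versions called out in this release before upgrading, a quick check like the one below can help. This is a minimal sketch that uses only the Python standard library and the tracking frameworks themselves; it does not call the MarkovML SDK, and the exact upgrade command and package name are covered by the installation instructions linked above.

```python
import sys

# SDK 1.2.4 adds support for Python 3.10; make sure the interpreter qualifies.
assert sys.version_info >= (3, 10), f"Python 3.10+ expected, found {sys.version.split()[0]}"

# Report versions of the experiment-tracking frameworks mentioned in this release.
# Each import is optional; frameworks that are not installed are simply skipped.
for name in ("torch", "xgboost", "sklearn"):
    try:
        module = __import__(name)
        print(f"{name}: {getattr(module, '__version__', 'unknown')}")
    except ImportError:
        print(f"{name}: not installed (skipping)")

# PyTorch 2.0.0 is the version called out for experiment tracking; compare against it
# if torch is available, ignoring local build suffixes such as "+cu117".
try:
    import torch
    base_version = torch.__version__.split("+")[0]
    if tuple(int(p) for p in base_version.split(".")[:3]) < (2, 0, 0):
        print("note: this release calls out PyTorch 2.0.0 for experiment tracking")
except ImportError:
    pass
```

Once the environment checks out, upgrade the SDK itself by following the linked installation instructions.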
We have added a new feature that allows users to download the results of exploratory data analysis (EDA) through our platform.
To use this feature, go to the Full Analysis tab on the dataset details page, select your preferred EDA, and let MarkovML do the rest. Once the analysis is complete, you can download the results directly from the platform.
We're excited to announce a new feature that will help you evaluate the quality of your labeled text datasets more easily. You can now view your datasets' Label Quality Estimate score at the top right of the dataset details page. This score is designed to give you an overall idea of the accuracy of the labeling in your dataset.
Additionally, in the Embeddings tab, you can now hover over the points to see the confidence estimates for each row of your dataset. This feature provides granular insights into the quality of the labeling.
We have revamped the UI of the evaluation comparison page to make it more intuitive and simpler to use.
You can now update the name and description of a project, evaluation, or dataset from its respective details page.
You can now see usage data for your workspace. This means you can track the resources consumed by your account.