Currently, ML.NET provides no API for measuring the accuracy of forecasted values against the actually observed values when doing time-series forecasting with SSA.
The existing TLC documentation describes the following error calculations; ML.NET should offer something similar:
Error Calculator
Once the expected value is produced by the time-series modeler component, it is compared against the actual observed value for the series at that time step to compute the amount of deviation. This calculation is done by the error calculator component, and the result is called the Raw Score. The implicit assumption is that the higher the absolute value of the raw score at a timestamp, the more likely it is that the time series is exhibiting anomalous behavior at that timestamp. TLC implements five error calculation functions that the user can choose depending on the application (sketched in code after the list below).
Signed Difference
The difference between the expected value and the observed value (this is the default error calculation function).
Absolute Difference
The absolute difference between the expected value and the observed value.
Signed Proportion
The proportional difference between the expected value and the observed value.
Absolute Proportion
The absolute proportional difference between the expected value and the observed value.
Squared Difference
The squared difference between the expected value and the observed value.
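
For illustration, here is a minimal C# sketch of such an error calculator. The names `ErrorFunction` and `ComputeRawScore` are hypothetical, not part of any existing ML.NET API; the five formulas follow the TLC definitions above, and the divide-by-zero guard in the proportional variants is an assumption, since TLC's exact behavior there isn't documented here.

```csharp
using System;

// Hypothetical names -- not part of the current ML.NET API.
public enum ErrorFunction
{
    SignedDifference,    // default in TLC
    AbsoluteDifference,
    SignedProportion,
    AbsoluteProportion,
    SquaredDifference
}

public static class ErrorCalculator
{
    // Computes the Raw Score for a single time step from the
    // forecasted (expected) value and the actually observed value.
    public static double ComputeRawScore(
        double expected, double observed,
        ErrorFunction fn = ErrorFunction.SignedDifference)
    {
        double diff = expected - observed;
        switch (fn)
        {
            case ErrorFunction.SignedDifference:
                return diff;
            case ErrorFunction.AbsoluteDifference:
                return Math.Abs(diff);
            case ErrorFunction.SignedProportion:
                // Assumed guard: TLC's handling of observed == 0 is unspecified here.
                return observed != 0 ? diff / observed : double.NaN;
            case ErrorFunction.AbsoluteProportion:
                return observed != 0 ? Math.Abs(diff / observed) : double.NaN;
            case ErrorFunction.SquaredDifference:
                return diff * diff;
            default:
                throw new ArgumentOutOfRangeException(nameof(fn));
        }
    }
}
```

Averaging these per-step scores over a forecast horizon gives familiar aggregate metrics: absolute differences yield MAE, and squared differences yield MSE (and its square root, RMSE), which is the kind of accuracy measure the requested API could report.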