Have you ever wanted a button that gives you the answers you’re looking for at the drop of a hat? Wish granted. The new CORTEX AutoEvidence feature does just that. (Well, as far as modeling your data goes — sorry, as far as modeling your data goes. We haven’t quite mastered the life-answers algorithm yet.)
The AutoEvidence Experience
Previously, using CORTEX for optimization problems meant manually searching for the highest probability of a target bin: applying evidence, evaluating how it affects the target variable, and repeating. This can be a time-consuming process when there are multiple variables, each with multiple bins, and we all know time is money. The CORTEX AutoEvidence feature automates this process, giving the user the results they are looking for without the legwork.
How does it work?
This feature provides a seamless add-on experience for Bayesian models. Users can apply “known” or “given” evidence, define experimental variables for the platform to analyze, and describe a desired state. Once this information is provided, the platform returns the combination of bins for the experimental variables that maximizes the probability of the desired state — meaning you get the answer you’re looking for in a fraction of the time.
1. Define any hard evidence you already know.
2. Click the “AutoEvidence” button and enter the following information:
- Select the target variable (it must be displayed)
- Select the target range
- Select a criterion: Maximize or Minimize
- Select the experimental variables, which are displayed on this screen
3. When all information is entered, click “Run Analysis”.
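Under the hood, the search that button kicks off can be pictured as a sweep over every combination of experimental-variable bins, scoring each against the target. The sketch below is illustrative only — the variable names, probabilities, and scoring function are assumptions, not the actual CORTEX engine, which would query the Bayesian model directly:

```python
from itertools import product

# Toy stand-in for the model: probability that the target lands in its
# desired range, given experimental bins plus fixed hard evidence.
# (Illustrative numbers only; not a real Bayesian inference call.)
def p_target_in_range(bins, hard_evidence):
    base = 0.2 if hard_evidence.get("machine") == "A" else 0.1
    bonus = {"temp": {"low": 0.1, "high": 0.3},
             "pressure": {"low": 0.25, "high": 0.05}}
    return base + sum(bonus[var][b] for var, b in bins.items())

def auto_evidence(experimental, hard_evidence, criterion="Maximize"):
    """Try every bin combination and keep the best one for the criterion."""
    best = None
    for combo in product(*experimental.values()):
        bins = dict(zip(experimental, combo))
        p = p_target_in_range(bins, hard_evidence)
        if best is None:
            best = (bins, p)
        elif (p > best[1]) if criterion == "Maximize" else (p < best[1]):
            best = (bins, p)
    return best

best_bins, best_p = auto_evidence(
    {"temp": ["low", "high"], "pressure": ["low", "high"]},
    hard_evidence={"machine": "A"},
)
# With these toy numbers, temp=high plus pressure=low wins with p = 0.75.
```

The point of the automation is exactly this loop: with many experimental variables and bins, the number of combinations grows multiplicatively, which is why doing it by hand gets expensive fast.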
The stability index is a metric computed from the average deviation between the probability of your target bin and the probabilities of its neighboring bins. It helps you understand how much data is available to support the prediction. The index considers:
- The row count when the target was in its good range
- The row count when the target was in its bad range (for a binary good/bad target)
- Neighboring bin combinations for continuous bins (is there good-range or bad-range data in the neighboring bins?)
- The average deviation of the probability results in neighboring bins (does the probability change drastically?)
- A factor based on the positive row count (when things were good) plus the neighboring count
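One way those factors could combine into a single score — assuming an average-deviation penalty and a row-count support term; the actual CORTEX formula and weights are not published here, so this is only a sketch:

```python
def stability_index(target_p, neighbor_ps, good_rows, neighbor_rows):
    """Higher score = more stable: small swings in neighboring-bin
    probabilities, plus plenty of supporting rows. (Illustrative formula.)"""
    # Average absolute deviation between the target bin's probability
    # and each neighboring bin's probability.
    avg_dev = sum(abs(target_p - p) for p in neighbor_ps) / len(neighbor_ps)
    # Support factor: positive row count plus neighboring row count.
    support = good_rows + neighbor_rows
    # Penalize drastic probability changes, reward data support.
    return support / (1.0 + avg_dev)

# A setting whose neighbors behave similarly scores higher than one
# whose probability collapses in the neighboring bins.
stable = stability_index(0.8, [0.78, 0.75], good_rows=40, neighbor_rows=12)
fragile = stability_index(0.8, [0.30, 0.25], good_rows=40, neighbor_rows=12)
```

The intuition: a high-probability bin surrounded by equally strong neighbors is trustworthy, while one whose neighbors drop off sharply may be an artifact of thin data.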
What does the output look like?
- You will see the top (most stable) settings based on the criteria you provided.
- The top 5 settings can be output as saved evidence profiles.
- The most stable models meet the following criteria:
  - Highest or lowest probability (based on the user-defined criterion)
  - Number of records in a "good" state given the recommended range and the stability of neighboring bins
  - Number of records in a "bad" state given the recommended range and the stability of neighboring bins
  - Variance of neighboring bins
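Putting those output criteria together, ranking candidate settings and keeping the top 5 as evidence profiles might look like the following sketch. The field names and sample numbers are assumptions for illustration, not the platform's actual schema:

```python
def top_profiles(candidates, n=5):
    """Rank candidate settings: highest probability first, then most
    'good' records, then lowest variance across neighboring bins."""
    return sorted(
        candidates,
        key=lambda c: (-c["probability"], -c["good_records"], c["neighbor_variance"]),
    )[:n]

candidates = [
    {"bins": {"temp": "high", "pressure": "low"}, "probability": 0.75,
     "good_records": 40, "bad_records": 5, "neighbor_variance": 0.02},
    {"bins": {"temp": "high", "pressure": "high"}, "probability": 0.55,
     "good_records": 22, "bad_records": 9, "neighbor_variance": 0.10},
    {"bins": {"temp": "low", "pressure": "low"}, "probability": 0.55,
     "good_records": 31, "bad_records": 7, "neighbor_variance": 0.04},
]
profiles = top_profiles(candidates)  # all three here, best-ranked first
```

Ties on probability are broken in favor of more supporting "good" records and calmer neighboring bins — the same stability story told by the criteria above.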