get_pareto_optimal_parameters(use_model_predictions=True) returns empty dictionary
See original GitHub issue

Hello,
I am using the Service API for MOO with the default MOO model, and I am unable to extract the Pareto optimal parameters with get_pareto_optimal_parameters(use_model_predictions=True). What could be the reason for that? The call does return some parameterizations when I set use_model_predictions=False; however, those don't seem to align with the plot I created using compute_posterior_pareto_frontier with that same model. Do you know why that might be?
Issue Analytics
- State:
- Created a year ago
- Comments:8 (6 by maintainers)
Top GitHub Comments
The source of the bug is that the Pareto frontier is being computed on Y values in transformed space, while the thresholds remain in the original space (whether they are inferred or provided). Here's a repro:

In this example, the original Y values range from 0 to 9 for each metric. A threshold of 7.596 is inferred for each metric. But the Pareto frontier is computed on normalized values, which range from -1.46 to 1.46. Since 1.46 is less than 7.596, none of the values qualifies for inclusion in the Pareto frontier.
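The mismatch can be illustrated with a small standalone sketch. This is plain NumPy, not Ax's actual transform code, and the exact numbers only loosely follow the repro above:

```python
import numpy as np

# Two objectives, both maximized; raw values span 0..9 for each metric.
Y = np.array([[float(i), float(9 - i)] for i in range(10)])

# Standardize each metric, mimicking what a transform layer does
# internally; the standardized values span roughly -1.49..1.49 here.
Y_std = (Y - Y.mean(axis=0)) / Y.std(axis=0, ddof=1)

# Thresholds inferred in the ORIGINAL outcome space (about 7.6 per
# metric in the issue's repro) are never transformed alongside Y.
thresholds = np.array([7.596, 7.596])

# A point only enters the Pareto frontier if it clears both thresholds;
# comparing standardized Y against raw-space thresholds rules out everything.
feasible = np.all(Y_std >= thresholds, axis=1)
print(int(feasible.sum()))  # 0 -- no point qualifies, so the result is empty
```

Comparing the standardized outcomes against untransformed thresholds is exactly the inconsistency described above: the frontier computation and the thresholds live in different spaces, so the returned dictionary is empty.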
Hello @lena-kashtelyan, thanks for responding so quickly! My implementation is very similar to the one shown in the Service API tutorials. The main difference is that I evaluate the objective function in external software and bring the results back to Python through a pickled dictionary. Additionally, I modified the code to support parallel evaluations with sequential batch creation. I previously mentioned that exact implementation in #879 resulted in poor coverage of the Pareto front. I am interested in finding parameterizations where both objective values are higher than 1, so I set the thresholds to 0.9. That being said, I also tried running the same optimization with the thresholds set to 0 and had the same issues: poor coverage of the Pareto front and an inability to extract the results with get_pareto_optimal_parameters. Maybe the two issues are related. I have tried to include all the relevant information in the code snippet below.
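While the bug stands, one way to cross-check the observed (untransformed) outcomes is to filter them for non-dominated points by hand. The sketch below is a hypothetical workaround, not part of the Ax API, and it assumes both objectives are maximized:

```python
import numpy as np

def pareto_mask(Y):
    """Boolean mask of non-dominated rows of Y (all objectives maximized)."""
    n = Y.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Row i is dominated if some other row is >= it on every objective
        # and strictly > it on at least one objective.
        dominated = np.any(
            np.all(Y >= Y[i], axis=1) & np.any(Y > Y[i], axis=1)
        )
        mask[i] = not dominated
    return mask

# Toy outcomes: the first three points trade off against each other,
# while the fourth is dominated by (2.0, 2.0).
Y = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [1.5, 1.5]])
print(pareto_mask(Y))  # [ True  True  True False]
```

Applying a filter like this to the raw trial data lets you compare against what compute_posterior_pareto_frontier plots, independently of the threshold handling inside get_pareto_optimal_parameters.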