custom objective function for pyspark
Hi, I can see that the custom objective function for the Scala API was recently added in this PR, which is really exciting! Is there any idea when this functionality will be added in pyspark (perhaps it already has been and I haven’t found the PR yet)?
I’m very interested in implementing a custom objective function for the LightGBMRanker model using mean average precision (trying to follow the approach in this paper), which is suited for binary relevance, since the current ‘lambdarank’ objective uses NDCG, which is best suited for graded relevance measures. It would be nice to have this feature, as the xgboost python package offers the rank:map objective in addition to the default rank:ndcg.
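For context, mean average precision for binary relevance can be computed directly from a ranked list of 0/1 labels. A minimal numpy sketch (synthetic data, not part of the original issue; function names are illustrative):

```python
import numpy as np

def average_precision(relevance):
    """Average precision for one ranked list of binary relevance labels.

    relevance: sequence of 0/1 labels in ranked order (highest-scored first).
    """
    relevance = np.asarray(relevance)
    hits = np.cumsum(relevance)                # relevant docs seen so far
    ranks = np.arange(1, len(relevance) + 1)
    precision_at_k = hits / ranks              # precision at each rank
    # AP = mean of precision@k taken over the ranks of the relevant docs
    return float(precision_at_k[relevance == 1].mean())

def mean_average_precision(ranked_lists):
    """MAP: average of per-query average precision."""
    return float(np.mean([average_precision(r) for r in ranked_lists]))

# One query with relevant docs at ranks 1, 3 and 4:
# precision@1 = 1/1, precision@3 = 2/3, precision@4 = 3/4 -> AP ~ 0.8056
print(round(average_precision([1, 0, 1, 1]), 4))  # -> 0.8056
```

Unlike NDCG, this metric only distinguishes relevant from non-relevant, which is why it fits binary training labels.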
Thanks so much! We’ve been using your model at our company for the past year, but our training data is binary, not graded, and I’d love to use something better suited to our data!
Issue Analytics
- Created: 2 years ago
- Comments: 8 (3 by maintainers)
Top GitHub Comments
Is there already an implemented way to use a custom objective function in pyspark? When I set a python custom objective function as the fobj argument of LightGBMClassifier, the following error was output:
java.lang.ClassCastException: class net.razorvine.pickle.objects.ClassDictConstructor cannot be cast to class com.microsoft.azure.synapseml.lightgbm.params.FObjTrait
I understand that the error occurred when converting the python object to the FObjTrait type.
If there is a way to use your own objective function in pyspark, I would appreciate a specific example.
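For comparison, in the non-distributed LightGBM python package a custom objective is simply a callable that returns the gradient and hessian of the loss with respect to the raw scores. A minimal sketch for binary log loss (illustrative only; the exact argument order differs between LightGBM's train and sklearn interfaces, and the pyspark fobj parameter instead expects a JVM FObjTrait, which is why passing a python callable fails as above):

```python
import numpy as np

def logloss_objective(preds, labels):
    """Custom binary log-loss objective in the (grad, hess) form that
    LightGBM's python API expects from a custom objective callable.

    preds: raw (pre-sigmoid) scores; labels: 0/1 targets.
    """
    p = 1.0 / (1.0 + np.exp(-preds))   # predicted probability
    grad = p - labels                  # d(loss)/d(raw score)
    hess = p * (1.0 - p)               # d^2(loss)/d(raw score)^2
    return grad, hess

# At a raw score of 0 with a positive label: p = 0.5, grad = -0.5, hess = 0.25
grad, hess = logloss_objective(np.array([0.0]), np.array([1.0]))
print(grad[0], hess[0])  # -> -0.5 0.25
```

A pyspark equivalent would need this callable to run inside the Scala workers, which is exactly the gap discussed below.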
@andrew-arkhipov it is supported in the Scala API:
- param: https://github.com/microsoft/SynapseML/blob/master/lightgbm/src/main/scala/com/microsoft/ml/spark/lightgbm/params/LightGBMParams.scala#L305
- param definition: https://github.com/microsoft/SynapseML/blob/master/lightgbm/src/main/scala/com/microsoft/ml/spark/lightgbm/params/FObjParam.scala
- Scala example: https://github.com/microsoft/SynapseML/blob/master/lightgbm/src/test/scala/com/microsoft/ml/spark/lightgbm/split1/VerifyLightGBMClassifier.scala#L338
It’s not yet supported in pyspark because there is no easy way to call the python process from a scala worker for an arbitrary function like this. I think I’ll have to look into Apache Spark’s interprocess communication code to figure out how to enable this scenario.