Add ClusterStartupTaskActor.Execute to protobuf definition
The following warning is written to the logs when a ClusterStartupTaskActor.Execute message is sent over the wire.
Using the default Java serializer for class [com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor$Execute$] which is not recommended because of performance implications. Use another serializer or disable this warning using the setting 'akka.actor.warn-about-java-serializer-usage'
There may be other messages used internally by Lagom that still rely on the Java serializer by default.
This is only visible when deploying in a cluster; single-node instances never serialize/deserialize messages.
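For context, this is roughly what binding the offending class to a non-Java serializer looks like in Akka configuration. It is only a sketch: the serializer name and class below are hypothetical placeholders, and the issue title suggests the actual fix is to add the message to Lagom's internal protobuf definitions rather than to user configuration.

```scala
import com.typesafe.config.ConfigFactory

object NonJavaSerializerBinding {
  // Sketch of an Akka serialization binding for the message, as it would appear
  // in application.conf. "my-proto" and com.example.MyProtobufSerializer are
  // hypothetical placeholders.
  val config = ConfigFactory.parseString(
    """
    akka.actor {
      serializers {
        # hypothetical serializer registration
        my-proto = "com.example.MyProtobufSerializer"
      }
      serialization-bindings {
        # bind the internal message class to the non-Java serializer
        "com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor$Execute$" = my-proto
      }
      # Silences (but does not fix) the warning quoted above:
      # warn-about-java-serializer-usage = off
    }
    """)
}
```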
Top GitHub Comments
Akka serialized objects carry a serializer id (an integer), and that id is used to identify which serializer should deserialize it, not the type. So, if com.example.Foo is configured to be serialized with the Java serializer on one node, and then it's sent to another node, it doesn't matter what that node has configured for com.example.Foo: the Java serializer will be used to deserialize it.
So the important thing is that before any nodes make the switch to serializing with the new serializer, all nodes have the new serializer. Then when the switch is made, some nodes will still be configured to use the Java serializer, but that doesn't matter, because the serializer will be looked up by the id that comes with the message, which will be the id of the new serializer.
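To make that mechanism concrete, here is a minimal round-trip sketch (not from the issue), assuming an Akka 2.4/2.5-era default configuration where Java serialization is still enabled; Foo is a stand-in class.

```scala
import akka.actor.ActorSystem
import akka.serialization.{ SerializationExtension, SerializerWithStringManifest }

// Stand-in for com.example.Foo; case classes are java.io.Serializable, so with
// 2.4/2.5 defaults this falls back to the Java serializer.
final case class Foo(name: String)

object SerializerIdRoundTrip extends App {
  val system        = ActorSystem("demo")
  val serialization = SerializationExtension(system)

  val msg        = Foo("bar")
  val serializer = serialization.findSerializerFor(msg)
  val bytes      = serializer.toBinary(msg)

  // The serializer id (and, for some serializers, a manifest) travels with the payload.
  val serializerId = serializer.identifier
  val manifest = serializer match {
    case s: SerializerWithStringManifest => s.manifest(msg)
    case s if s.includeManifest          => msg.getClass.getName
    case _                               => ""
  }

  // The receiving side looks the serializer up purely by this id; its own
  // serialization-bindings for Foo play no role in deserialization.
  val restored = serialization.deserialize(bytes, serializerId, manifest)
  println(restored) // Success(Foo(bar))

  system.terminate()
}
```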
That is correct if all serializers are backported. The migration guide is misleading, but there are a few serializers that haven’t been backported so I’m not sure we can say that it will just work for everyone. Those missing are related to remote deployment so it will not be a problem for Lagom.
A serialized remote message (or persistent event) consists of the serializerId, the manifest and the payload. When deserializing, Akka looks only at the serializerId to pick which Serializer to use for fromBinary. The message class (the bindings) is not used for deserialization. The manifest is only used within the Serializer to decide how to deserialize the payload, so one Serializer can handle many classes.
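A minimal sketch of that structure, assuming a singleton Execute message. This is not Lagom's actual serializer; the class name, manifest string and id are invented for illustration.

```scala
import akka.serialization.SerializerWithStringManifest

// Stand-in for the real ClusterStartupTaskActor.Execute singleton message.
case object Execute

// Illustrates the serializerId / manifest / payload split described above.
class StartupMessageSerializer extends SerializerWithStringManifest {

  private val ExecuteManifest = "Execute"

  // Written next to every payload; the receiving side uses it to pick this
  // serializer for fromBinary.
  override def identifier: Int = 424242

  // The manifest is only interpreted inside this serializer, which is how one
  // serializer can handle many message classes.
  override def manifest(o: AnyRef): String = o match {
    case Execute => ExecuteManifest
    case other   => throw new IllegalArgumentException(s"Cannot serialize ${other.getClass}")
  }

  // Execute carries no data, so an empty payload is enough.
  override def toBinary(o: AnyRef): Array[Byte] = o match {
    case Execute => Array.emptyByteArray
    case other   => throw new IllegalArgumentException(s"Cannot serialize ${other.getClass}")
  }

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef = manifest match {
    case ExecuteManifest => Execute
    case other           => throw new IllegalArgumentException(s"Unknown manifest: $other")
  }
}
```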
A somewhat safer way is to first set enable-additional-serialization-bindings = off on both 2.4 and 2.5 nodes, do a rolling upgrade to 2.5 for all nodes, and then switch it to on in another rolling upgrade.
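Sketched as configuration, assuming the classic akka.actor.enable-additional-serialization-bindings setting named above:

```scala
import com.typesafe.config.ConfigFactory

object RollingUpgradeConfig {
  // Phase 1: shipped to every node while 2.4 and 2.5 are still mixed.
  val phase1 = ConfigFactory.parseString(
    "akka.actor.enable-additional-serialization-bindings = off")

  // Phase 2: rolled out again once all nodes run 2.5.
  val phase2 = ConfigFactory.parseString(
    "akka.actor.enable-additional-serialization-bindings = on")
}
```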