Disclaimer: I’m not claiming any discovery in this article; I only want to help you avoid mistakes when building applications.
We all love the “magic” that a tool like SignalR gives us and are happy to implement it in our projects.
Of course, who would refuse dynamic pages, instant reactions to actions, and blinking icons that tell you what the system is doing right now and whether it’s worth reloading the page to click again?
However, here too there are a couple of pitfalls, which my team and I encountered in production.
So, what’s the problem?
I can’t reveal all the details, but in short: we use SignalR for a number of things on the front end, one of which is tracking the status of an asynchronous task that is triggered by a button.
We show the user the task statuses so that they feel comfortable (and don’t run to support).
What could go wrong?
And indeed, everything is okay: statuses are sent, the websocket connection is maintained, the task completes perfectly…
| “I don’t know, everything works locally” © Developer Quote Foundation.
The problem begins exactly at the moment when multiple instances of the application come into play.
Any ideas?
The fact is that when we have three instances of the application (A1, A2, A3), each instance knows only about its own connections.
And if the request that opens a connection lands on A1, subsequent requests may end up on a completely different instance (A1, A2 or A3).
What does this mean?
That’s right: a connection error in the console and a new connection, this time to another instance.
And so on, ad infinitum…
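To make the failure concrete, here is a toy model of the situation (all names are hypothetical; no real SignalR is involved): each “instance” tracks only its own connections, so a broadcast from one instance never reaches clients connected to another.

```javascript
// Toy model: each application instance keeps only its own connection set,
// just like a SignalR hub does without a backplane.
class Instance {
  constructor(name) {
    this.name = name;
    this.connections = new Set();
  }
  connect(client) { this.connections.add(client); }
  broadcast(message) {
    // "Clients.All" here means all clients of THIS instance only.
    for (const client of this.connections) client.inbox.push(message);
  }
}

const makeClient = (id) => ({ id, inbox: [] });

const a1 = new Instance("A1");
const a2 = new Instance("A2");
const alice = makeClient("alice");
const bob = makeClient("bob");

a1.connect(alice); // load balancer routed Alice to A1
a2.connect(bob);   // ...and Bob to A2

a1.broadcast("hello"); // only A1's clients hear it
console.log(alice.inbox); // ["hello"]
console.log(bob.inbox);   // [] -- Bob never sees the message
```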
What do they write on the Internet?
Of course, I immediately began to study the problem more deeply, and broadly three solutions were offered:
- Don’t use SignalR
- Use a database to store connections
- Use a common bus that will connect all instances of the application (for example, attach Redis)
Let’s immediately set aside the options of not using SignalR at all and of storing connections in a database (too expensive in terms of maintenance and time).
Let’s focus on option 3, a common bus with Redis; besides, there are already ready-made libraries for this purpose.
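A minimal sketch of the idea behind the common bus (an in-process stand-in for Redis, hypothetical names): every instance publishes broadcasts to one shared channel and delivers whatever arrives there to its own local connections, so a message sent through any instance reaches everyone.

```javascript
// In-process stand-in for the shared bus (Redis in the real setup).
class Bus {
  constructor() { this.subscribers = []; }
  subscribe(fn) { this.subscribers.push(fn); }
  publish(msg) { for (const fn of this.subscribers) fn(msg); }
}

class Instance {
  constructor(name, bus) {
    this.name = name;
    this.connections = new Set();
    this.bus = bus;
    // Deliver anything published on the bus to this instance's own clients.
    bus.subscribe((msg) => {
      for (const client of this.connections) client.inbox.push(msg);
    });
  }
  connect(client) { this.connections.add(client); }
  // Broadcasts go through the bus instead of directly to local clients.
  broadcast(msg) { this.bus.publish(msg); }
}

const bus = new Bus();
const a1 = new Instance("A1", bus);
const a2 = new Instance("A2", bus);
const alice = { id: "alice", inbox: [] };
const bob = { id: "bob", inbox: [] };

a1.connect(alice);
a2.connect(bob);

a1.broadcast("hello");
console.log(alice.inbox); // ["hello"]
console.log(bob.inbox);   // ["hello"] -- reaches clients on every instance
```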
Let’s look at all of this using the example of a simple chat application and attach a common bus to it so that the application scales.
Let’s create a basic application with real-time chat and reproduce the problem:
Backend
To do this, let’s create a .NET Core web application without controllers and call it Chat.Api.
Let’s connect SignalR. Since ASP.NET Core 3.0 it ships as part of the shared framework, so no separate package reference is needed (the Microsoft.AspNet.SignalR.Core package belongs to the legacy ASP.NET stack and won’t work with the Hub API used below). We register the service and map the hub in Program.cs:
builder.Services.AddSignalR();
app.MapHub<ChatHub>("/hubs/chat");
Let’s describe a class with the fields of a regular message:
public class Message
{
    public required string UserName { get; set; }
    public required string Description { get; set; }
    public required long Timestamp { get; set; }
}
Let’s implement the Hub class with a single method, SendMessage:
public class ChatHub : Hub
{
    public async Task SendMessage(Message message)
    {
        await Clients.All.SendAsync("ReceiveMessage", message);
    }
}
Frontend
- Using the CLI, we create the simplest Vue 3 application and remove everything unnecessary
- We create the Chat.vue component and put all the logic there (we won’t pay attention to code beauty, because the goal here is different)
- We install the library:
npm i @microsoft/signalr
and in the right place in the code we create a connection (note: access-control-allow-origin is a response header, so CORS actually has to be allowed on the server side; sending it from the client is harmless but has no effect):
const connection = new HubConnectionBuilder()
  .withUrl("http://localhost:4000/hubs/chat", {
    headers: { "access-control-allow-origin": "*" },
  })
  .configureLogging(LogLevel.Information)
  .build();
And we use this connection to interact with the server.
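As a rough sketch of that interaction, the wiring below uses the event and method names from the ChatHub on the backend (“ReceiveMessage”, “SendMessage”), but replaces the real HubConnection with a tiny in-memory stand-in so the contract can be shown without a running server; the helper functions are purely illustrative.

```javascript
// Register a handler for messages pushed by the server.
function wireChat(connection, onMessage) {
  connection.on("ReceiveMessage", onMessage);
}

// Call the hub's SendMessage method with a Message-shaped payload.
function sendMessage(connection, userName, description) {
  return connection.invoke("SendMessage", {
    userName,
    description,
    timestamp: Date.now(),
  });
}

// In-memory stand-in with the same on/invoke surface as a HubConnection;
// it simply echoes SendMessage back as ReceiveMessage, like Clients.All does.
function fakeConnection() {
  const handlers = {};
  return {
    on: (name, handler) => { handlers[name] = handler; },
    invoke: (name, msg) => {
      if (name === "SendMessage" && handlers["ReceiveMessage"]) {
        handlers["ReceiveMessage"](msg);
      }
    },
  };
}

const received = [];
const conn = fakeConnection();
wireChat(conn, (m) => received.push(m));
sendMessage(conn, "alice", "hi there");
console.log(received[0].description); // "hi there"
```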
We package the application into a Docker container and configure Nginx so that we can enjoy N instances of the application at work.
P.S. I don’t see the point in describing everything in detail here; for that, you can look at the repository at https://github.com/mushegovdima/chat
Let’s launch
We launch the Vue 3 application directly from the console, and bring up Chat.Api with the command:
docker-compose up --build --scale chat.api=5
Here we can see that the two clients have connected to different instances and know nothing about each other.
Solution
As a solution, we decided to use Redis, which will store the connection state of the entire cluster.
To do this, let’s include the library:
<PackageReference Include="Microsoft.AspNetCore.SignalR.StackExchangeRedis" Version="7.0.18" />
And add the settings to Program.cs, specifying the application’s channel prefix (this is important):
builder.Services
    .AddSignalR()
    .AddStackExchangeRedis("host.docker.internal:6379", o => {
        o.Configuration.AllowAdmin = true;
        o.Configuration.ChannelPrefix = "Chat.Api";
    });
* alternatively, the Redis connection parameters can be moved to a config file
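A toy illustration of why the channel prefix matters (an in-process stand-in for Redis pub/sub, hypothetical channel names): two different applications sharing one Redis server must not hear each other’s backplane traffic, and the prefix keeps their channels apart.

```javascript
// Minimal channel-based pub/sub, standing in for Redis.
class Bus {
  constructor() { this.subs = new Map(); }
  subscribe(channel, fn) {
    if (!this.subs.has(channel)) this.subs.set(channel, []);
    this.subs.get(channel).push(fn);
  }
  publish(channel, msg) {
    for (const fn of this.subs.get(channel) ?? []) fn(msg);
  }
}

const redis = new Bus();
const chatMessages = [];
const adminMessages = [];

// Each app prefixes its channels, like ChannelPrefix = "Chat.Api" above.
redis.subscribe("Chat.Api:all", (m) => chatMessages.push(m));
redis.subscribe("Admin.Api:all", (m) => adminMessages.push(m));

redis.publish("Chat.Api:all", "hello");
console.log(chatMessages);  // ["hello"]
console.log(adminMessages); // [] -- the other app's hubs are untouched
```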
In nginx.conf we add the parameters needed for correct WebSocket interaction:
server {
    listen 4000;

    location / {
        proxy_pass http://chat.api:3001;
        proxy_intercept_errors on;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
When creating the connection on the front end, we make a couple of changes: we skip the negotiation step and force the WebSocket transport, so that the negotiate request and the actual connection cannot land on different instances:
const connection = new HubConnectionBuilder()
  .withUrl("http://localhost:4000/hubs/chat", {
    headers: { "access-control-allow-origin": "*" },
    skipNegotiation: true,                   // new!
    transport: HttpTransportType.WebSockets, // new!
  })
  .configureLogging(LogLevel.Information)
  .build();
We restart the system: docker-compose up --build --scale chat.api=5
And we get the result where all users receive messages:
Log from Redis (Monitor) when a new user opens the application:
In the end
We reproduced the problem and found the best solution for our case, but it may not necessarily be the ideal solution for your situation.
Things to add:
- Microsoft recommends keeping Redis as “close” to the applications as possible to avoid data-transfer overhead (as we can see, there is quite a lot of interaction with Redis between the applications)
- Try to minimize the amount of data transferred in the message body via Redis
- It is worth paying special attention to the nginx setup and the optimal number of application instances
- Under heavy load, build a Redis cluster
Repository – https://github.com/mushegovdima/chat
May the Force be with you.
Contacts
Social: @mushegovdima Email: [email protected]