This is more of a web application problem than a Kubernetes problem.
The web application must not keep local state. It must not keep important data in memory or write it to its (temporary) filesystem. Instead, it must keep relevant data externally, in a database that is shared by all instances. (See also https://12factor.net)
Incoming user requests can then be load-balanced across all instances. A user isn't signed in to any particular instance, but into the web application as a whole.
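As a minimal sketch of what that looks like, here is an HTTP handler that keeps session data in a shared Redis instance instead of in process memory. This assumes the go-redis client; the redis:6379 address, the session_id cookie, and the session: key scheme are only illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/redis/go-redis/v9"
)

// rdb points at a Redis service that every replica of the app can reach.
var rdb = redis.NewClient(&redis.Options{Addr: "redis:6379"})

func profile(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	// The cookie only identifies the session; the session data itself lives
	// in Redis, not in this process, so any replica can serve the request.
	c, err := r.Cookie("session_id")
	if err != nil {
		http.Error(w, "not signed in", http.StatusUnauthorized)
		return
	}

	user, err := rdb.Get(ctx, "session:"+c.Value).Result()
	if err == redis.Nil {
		http.Error(w, "session expired", http.StatusUnauthorized)
		return
	} else if err != nil {
		http.Error(w, "session store unavailable", http.StatusInternalServerError)
		return
	}

	// Sliding expiration: refresh the TTL in the shared store.
	rdb.Expire(ctx, "session:"+c.Value, 30*time.Minute)

	fmt.Fprintf(w, "hello, %s\n", user)
}

func main() {
	http.HandleFunc("/profile", profile)
	http.ListenAndServe(":8080", nil)
}
```

Because nothing request-relevant lives in the pod itself, it makes no difference which replica receives a given request.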
HTTP works very well here because it is a stateless protocol: each request can be handled on its own, by whichever instance receives it. But some protocols do carry connection-level state, e.g. HTTP with keep-alive, HTTP/2, WebSockets, TLS, …. In that case, the connection is severed if the pod behind it goes down and the client has to reconnect.
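On the client side, the usual answer is a reconnect loop. A rough sketch, assuming the gorilla/websocket package (the ws://example.com/ws URL is a placeholder):

```go
package main

import (
	"log"
	"time"

	"github.com/gorilla/websocket"
)

// connectAndListen dials the server and reads messages until the connection
// drops, e.g. because the pod behind it went away.
func connectAndListen(url string) error {
	conn, _, err := websocket.DefaultDialer.Dial(url, nil)
	if err != nil {
		return err
	}
	defer conn.Close()

	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			return err // connection severed
		}
		log.Printf("received: %s", msg)
	}
}

func main() {
	// Reconnect with a simple fixed backoff whenever the connection is lost;
	// because the server keeps no per-connection state that matters, any
	// replica can take over after the reconnect.
	for {
		if err := connectAndListen("ws://example.com/ws"); err != nil {
			log.Printf("connection lost: %v, reconnecting", err)
		}
		time.Sleep(2 * time.Second)
	}
}
```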
Severed connections can also be mitigated by load-balancing or proxy schemes in which the client never connects directly to your web app. Of course this fails if the load balancer itself goes down, but it likely still increases overall availability.
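In Kubernetes a Service already plays this role, but conceptually it is just a reverse proxy in front of the replicas. A bare-bones sketch using Go's standard library (the webapp.internal backend address is made up):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Clients only ever talk to this proxy; which backend replica actually
	// handles a request (and whether it was restarted in between) is
	// invisible to them.
	backend, err := url.Parse("http://webapp.internal:8080") // hypothetical backend service
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)
	log.Fatal(http.ListenAndServe(":80", proxy))
}
```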