Security

How Pollis keeps your conversations private — at three levels of detail.

For a friend

Pollis is an app for sending messages — one-on-one, in a small group, or in larger spaces with channels. The thing that makes it different is that we, the people running Pollis, can't read what you write. Most messaging apps could if they wanted to. We can't.

The picture is a sealed envelope, but stronger than the paper kind. Before your message leaves your computer, it's locked inside a digital envelope that only the person you're writing to can open. We see who the envelope is going to and roughly how big it is, but we can't get inside. Even if our offices were broken into, the messages would stay unreadable. Groups work the same way: only the people currently in a group can open its messages. Someone who joins later can read what's said from the moment they joined onward, but not what came before. That's on purpose.

To use Pollis you sign in with your email — we send you a one-time code — and pick a four-digit PIN. The PIN unlocks the app on your computer, and after enough wrong guesses it erases your messages on that device. You also get a long backup code: save it somewhere safe. A password manager works. So does a piece of paper in a drawer. If you ever lose your computer or forget your PIN, that code is the only way back in. We don't keep a copy.

You can use Pollis on more than one device. Sign in on the new one, confirm a code on the old one, and the two agree privately to trust each other. New devices start empty — they don't get your old messages — but from then on, both devices receive everything going forward.

Files like pictures, documents, and voice notes are sealed the same way. Voice chat is the exception: when you talk in a voice channel, your voice passes through our servers in a form we could technically listen to. We don't, and we don't record it, but we won't pretend it's sealed the way text is. Anything you wouldn't want overheard probably shouldn't go over voice.

A few honest trade-offs. Because we can't read your messages, we can't get them back for you if you lose everything. We can see who you talk to and roughly when — that's just how the internet works. If somebody steals your computer while it's already unlocked, they can read your messages, because at that point it thinks they're you. And the privacy is only as strong as the people you're talking to.

That's the whole picture. It works like a normal messaging app, except we built it so we couldn't snoop even if we wanted to or were forced to. The cost is that you have to keep your PIN and backup code somewhere safe — there's no help desk that can let you back in.

How Pollis Keeps Your Conversations Private

Pollis is a desktop chat app for groups. If you have used Slack or Discord at work, or Microsoft Teams, you already know the shape of it: there are groups, there are channels inside groups, there are direct messages between people, there is voice, and there are files. The thing that makes Pollis different from those apps is what happens to the messages once they leave your device. In Slack, Teams, and Discord, the company that runs the service can read every message you send, because the server stores them in plaintext. In Pollis, the server cannot read them, because by the time a message reaches the server it has already been turned into ciphertext that can only be opened by the people you sent it to. This document explains how that actually works, in language that does not require a computer-science background, while still naming the specific cryptography involved so that anyone curious enough to look it up will be looking up the right things.

The starting point for any conversation about security is who you have to trust. With most messaging apps you trust three things implicitly: your own device, the company that built the app, and the company that runs the servers. With Pollis you trust your own device and you trust the version of the app you installed on the day you installed it. After that, you do not have to trust the servers at all. The remote database, the file storage, the voice routing service, the email service that delivers your sign-in code, all of those can be operated by people you have never met, and the design assumes those people are watching. The way this works is that everything sensitive happens on your device, before anything goes anywhere. Your messages are encrypted on the laptop in front of you. Your private keys are generated on the laptop in front of you and never leave it in a form anyone else can read. The server sees who is in which group, who is talking to whom, and how big the messages are, the same way a mail carrier sees the addresses on the envelopes they are carrying. It does not see what is inside the envelope. The difference between a real envelope and a Pollis envelope is that a real envelope can be steamed open by anyone determined enough to try; a Pollis envelope is sealed with mathematics, and the only person who has the matching key is the person it is addressed to.

The encryption Pollis uses for group chat is called MLS, which stands for Messaging Layer Security, and it is described in a public internet standard, RFC 9420. MLS was finalized in 2023 and was designed specifically for the problem of encrypting messages in groups that change over time, where people are added and removed, where people use multiple devices, and where the math has to keep working even when the group has thousands of members. It is the approach that Wire, Cisco Webex, and an increasing share of modern secure-messaging products are converging on. The library Pollis uses to implement it is called OpenMLS, written in Rust, and it uses a particular set of underlying cryptographic algorithms that the MLS standard calls cipher suite one. That cipher suite combines four primitives. There is X25519, an algorithm for two devices to agree on a shared secret without ever transmitting that secret over the network. There is HKDF-SHA256, which takes a shared secret and shapes it into a key of the right size and form for the next step. There is AES-128-GCM, the actual cipher that scrambles the message, and which also includes a built-in tamper-check so that if anyone changes a single bit of the ciphertext in transit, the recipient will know and refuse to decrypt it. And there is Ed25519, a digital signature scheme that lets a device prove a message really came from the device that claims to have sent it. These four pieces, used together, are roughly the same security level as the encryption protecting your bank's website, with the difference that the keys here belong to your devices rather than to a central server.
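
The HKDF step named above is small enough to sketch from scratch. Here is a minimal HKDF-SHA256 in Python, following RFC 5869's extract-and-expand shape. Since X25519 is not in the Python stdlib, a fixed toy value stands in for the agreed shared secret, and the salt and info labels are illustrative, not Pollis's actual ones.

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF with SHA-256: extract, then expand."""
    # Extract: compress the input keying material into a fixed-size PRK.
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    # Expand: stretch the PRK into `length` bytes, bound to the `info` label.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# A toy value standing in for an X25519-agreed shared secret.
shared = b"\x0b" * 32
key = hkdf_sha256(shared, salt=b"pollis-example", info=b"message key", length=16)
other = hkdf_sha256(shared, salt=b"pollis-example", info=b"header key", length=16)

assert len(key) == 16   # AES-128-GCM key size
assert key != other     # different info labels yield independent keys
```

The point of the info label is visible in the last assertion: one shared secret can safely feed many distinct keys, as long as each use gets its own label.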

To understand what is actually being protected, it helps to understand the three layers of identity inside Pollis. The first layer is your account identity. Each user has one long-lived signing key, generated on the device where you first signed up. The mathematics here is Ed25519, which produces a tiny private number that only your device knows, and a corresponding public number that the server publishes for everyone else to see. When other people's devices want to verify that something really came from you, they check it against your public number. The private number, the one that actually proves who you are, never travels. The second layer is your device identity. Each laptop or computer you use Pollis on has its own unique identifier and its own signing key, separate from the account identity. The third layer is what MLS calls a leaf, which is the position your particular device occupies inside a particular group's encrypted state. Each device's leaf is signed by that device's signing key, and that signing key in turn is signed by your account identity key. This nesting is what protects you against a malicious server trying to insert a fake device into your account. If the server tried to slip in a phony device, that device would have no signature from your real account identity, and the other devices in the group would notice.

Signing in to Pollis works differently from a typical app, because of an old problem. In most apps, you type a password every time, and that password is what unlocks your account. Pollis does not have a password, because passwords get reused, get phished, and end up in breach lists. Instead, signing in for the first time on a device involves an email containing a six-digit code, sent through a service called Resend. You type the code back into the app, and that proves you control the email address. From that moment on, on that particular device, you stop using the email code and start using a four-digit PIN. The PIN is local to that one device. It is never sent anywhere, the company that built Pollis has no idea what your PIN is, and there is no way to recover it from the server because the server never knew it. What the PIN does is unlock a small encrypted file in your operating system's keychain. That file contains the actual cryptographic keys your device needs to decrypt your local message database and to sign things on your behalf. Without the PIN, even someone holding your laptop and looking inside the keychain only sees scrambled bytes.

The reason a four-digit PIN is enough to protect anything is a function called Argon2id. Argon2id is the current recommended algorithm for turning a low-entropy thing like a PIN into something hard to brute-force. The way it works is that it deliberately wastes a lot of memory and CPU time on a single attempt. Pollis tunes Argon2id to use sixty-four megabytes of memory and to take roughly a quarter of a second per try. That sounds fast, but consider that an attacker who has stolen a copy of your encrypted keystore would need to run that quarter-second computation for every guess, and there are ten thousand possible four-digit PINs. Combined with a hard cap of ten wrong tries before the encrypted material is deliberately deleted from your device, the math turns four digits, which sounds trivially weak, into something genuinely difficult to defeat. The encryption that wraps the keys themselves uses XChaCha20-Poly1305, a modern stream cipher with a built-in tamper check, chosen specifically because the very long random number it uses for each encryption, the nonce, makes accidental nonce reuse, a rare but serious class of implementation mistake, effectively impossible.
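
The memory-hard idea can be sketched with the standard library. Argon2id itself is not in Python's stdlib, so this sketch substitutes hashlib.scrypt, a different memory-hard KDF with the same design goal; the parameters, salt handling, and PIN are all illustrative, not Pollis's.

```python
import hashlib
import os

# Illustrative parameters, not Pollis's. scrypt stands in for Argon2id:
# both force every guess to pay a real memory and CPU cost. With n=2**14
# and r=8, each call below needs about 16 MB; the text describes Pollis
# tuning Argon2id to 64 MB and roughly a quarter second per try.
salt = os.urandom(16)

def pin_to_key(pin: str) -> bytes:
    return hashlib.scrypt(pin.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)

key = pin_to_key("4921")
assert pin_to_key("4921") == key   # same PIN and salt, same key
assert pin_to_key("4922") != key   # every wrong guess pays the full cost

# The arithmetic from the paragraph above, before the ten-try wipe and
# the per-guess memory cost are even considered:
guesses = 10_000          # possible four-digit PINs
seconds_per_guess = 0.25  # the tuned Argon2id cost per attempt
assert guesses * seconds_per_guess == 2500.0  # serial worst case, in seconds
```

The salt matters: it is random per keystore, so an attacker cannot precompute a table of all ten thousand PIN-derived keys in advance.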

The PIN protects the device. The Secret Key protects your account. When you first sign up for Pollis, the app shows you a string that looks like A3-XXXXX-XXXXX-XXXXX-XXXXX-XXXXX-XXXXX, six groups of five characters separated by dashes. Those characters come from a thirty-two character alphabet — the digits zero through nine plus the letters of the alphabet, with the letters I, L, O, and U deliberately removed because they are easy to confuse with one and zero. There are thirty random characters in the body, which works out to a hundred and fifty bits of entropy. To put that in perspective, a randomly chosen twenty-character password using upper case, lower case, digits and symbols is roughly a hundred and thirty bits of entropy, and that is already considered far beyond brute-forceable. A hundred and fifty bits is not just impractical to guess but, given the energy that would be required, physically unreasonable to guess. The Secret Key is what allows you to recover your account on a brand new device when you have no other devices logged in. The server stores a copy of your account identity key, but it stores it locked inside an encrypted box, and the only key that opens that box is the Secret Key, which the server does not have. The cryptography here uses HKDF-SHA256 to turn the Secret Key into an unlocking key, and AES-256-GCM to do the actual unwrapping. Pollis shows the Secret Key to you exactly once, and you are expected to write it down somewhere safe and treat it like the deed to a house. If you lose it, and you also lose every device you were ever signed in on, your account is gone. There is no support line that can recover it because there is no support line that has the key to recover it with.
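
Generating a key in that shape is a few lines of stdlib Python. This is a sketch built only from what the paragraph above describes — the alphabet, the grouping, and the A3 prefix — not Pollis's actual generator.

```python
import math
import secrets

# The thirty-two character alphabet described above: digits 0-9 plus
# A-Z with I, L, O, and U removed (easily confused with 1 and 0).
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"
assert len(ALPHABET) == 32

def make_secret_key() -> str:
    """Sketch of a Secret Key in the A3-XXXXX-... shape from the text."""
    body = "".join(secrets.choice(ALPHABET) for _ in range(30))
    groups = [body[i:i + 5] for i in range(0, 30, 5)]
    return "A3-" + "-".join(groups)

key = make_secret_key()
assert len(key.replace("-", "")) == 32          # "A3" prefix plus 30 random characters
assert math.log2(len(ALPHABET)) * 30 == 150.0   # 5 bits per character -> 150 bits
```

The entropy arithmetic in the last line is the whole argument: thirty characters from a thirty-two symbol alphabet is exactly five bits each, hence a hundred and fifty bits total.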

When you want to sign in to Pollis on a second device, two paths exist. The friendlier path is approval by another device. The new device makes up a temporary pair of cryptographic keys, displays a random six-digit verification code, and asks the server to forward a notice to your other devices. One of those other devices shows you the same six-digit code, and asks if it matches. When you confirm, that device performs a calculation called Diffie-Hellman key agreement, using X25519, with the temporary key the new device sent. The result of that calculation is a shared secret that only the two devices know, even though the math happened in plain sight on the server. They use that shared secret, run through HKDF-SHA256 and then AES-256-GCM, to wrap your account identity key, and the new device unwraps it. The six digits you compared are the human-checkable part of the protocol. Even an attacker who could read and write the server's database cannot inject a false approval, because they would not know what code your real device showed you, and you would notice the mismatch. The fallback path is the Secret Key recovery flow, which is the same idea but uses your Secret Key instead of an existing device. Either way, once a new device has the account identity key, it sets a fresh PIN, publishes its own device certificate signed by the account identity key, and joins every group you are a member of via a feature of MLS called external commit, where it inserts itself into the group's encrypted state using a public snapshot the group leaves behind for exactly this purpose.
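
The agreement step can be illustrated with textbook Diffie-Hellman. X25519 is not in Python's stdlib, so this toy uses classic finite-field DH over a small Mersenne prime — far too small to be secure, but it shows the same property: the server relays only public values, yet both devices compute the same secret. The six-digit code derivation at the end is illustrative, not Pollis's actual construction.

```python
import hashlib
import secrets

# Toy parameters: 2**89 - 1 is prime, but nowhere near large enough for
# real use. Pollis is described as using X25519 for this step.
P = 2**89 - 1
G = 3

old_priv = secrets.randbelow(P - 2) + 1   # existing device's ephemeral secret
new_priv = secrets.randbelow(P - 2) + 1   # new device's ephemeral secret
old_pub = pow(G, old_priv, P)             # these two values are all the
new_pub = pow(G, new_priv, P)             # server ever sees and relays

shared_old = pow(new_pub, old_priv, P)    # computed on the old device
shared_new = pow(old_pub, new_priv, P)    # computed on the new device
assert shared_old == shared_new           # same secret, never transmitted

# A short human-checkable code derived from the public transcript, in the
# spirit of the six-digit confirmation code described above.
transcript = f"{old_pub}:{new_pub}".encode()
code = int.from_bytes(hashlib.sha256(transcript).digest()[:4], "big") % 1_000_000
print(f"confirm on both devices: {code:06d}")
```

The shared secret would then feed HKDF-SHA256 and AES-256-GCM to wrap the account identity key, exactly as the paragraph describes.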

The data that lives on your laptop is also encrypted. The local database file, which is where decrypted messages, group state, and various caches are stored, is opened through a system called SQLCipher, which is a fork of the SQLite database that adds page-level encryption. The encryption underneath is AES-256 in CBC mode, with HMAC-SHA512 over each page protecting against tampering. The key that opens the database is a thirty-two byte random number, generated on first sign-up, and stored only in the keystore, only as ciphertext under the PIN-derived key. If somebody copies your hard drive while the app is locked, what they get is a file that is mathematically indistinguishable from random noise. The local database deliberately does not contain things like your contact list or the list of groups you belong to, because those are fetched from the server and do not need to live on disk. This separation means that a stolen laptop, with the PIN unknown to the thief, leaks nothing useful, while the trade-off is that an offline laptop cannot remind you of the names of people you talk to until it gets back online for a moment.
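
The per-page tamper check can be sketched in a few lines. This is not SQLCipher itself, just the HMAC-SHA512-over-each-page idea it is described as using; the real page size, key management, and per-page IV are omitted.

```python
import hashlib
import hmac
import os

# Hypothetical MAC key; in SQLCipher it is derived from the database key.
mac_key = os.urandom(32)

def seal_page(page: bytes) -> bytes:
    """Append an HMAC-SHA512 tag to one encrypted page."""
    return page + hmac.new(mac_key, page, hashlib.sha512).digest()

def verify_page(sealed: bytes) -> bytes:
    """Check the tag before the page is used; reject any modification."""
    page, tag = sealed[:-64], sealed[-64:]
    expected = hmac.new(mac_key, page, hashlib.sha512).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("page failed HMAC check; refusing to use it")
    return page

sealed = seal_page(b"ciphertext bytes of one database page")
assert verify_page(sealed) == sealed[:-64]

tampered = bytes([sealed[0] ^ 1]) + sealed[1:]   # flip a single bit
try:
    verify_page(tampered)
    raise AssertionError("tampering should have been detected")
except ValueError:
    pass
```

The constant-time comparison via hmac.compare_digest matters: a naive == comparison can leak how many tag bytes matched through timing.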

Files and images go through a separate path called convergent encryption. When you upload a photo, the app first computes a fingerprint of the file, called a SHA-256 hash. It then runs that fingerprint through HKDF-SHA256 to derive both the encryption key and the random number used for that one upload. The key encrypts the file using AES-256-GCM, in chunks of four megabytes at a time so that very large files do not have to be loaded into memory all at once. The encrypted blob, plus the fingerprint, is what gets uploaded to Cloudflare R2. The interesting consequence is that two different people uploading the same image end up at the same R2 object, and the system can avoid storing two copies. The interesting cost is that an attacker who already has a candidate image, in plaintext, can compute the same fingerprint and ask the server whether anyone has uploaded that exact image. They cannot read it, but they can confirm or deny its presence. This is the same trade-off used by services like MEGA. Pollis accepts it as the price of cross-user deduplication; if a future audit decides the trade-off is wrong, it can be replaced with per-conversation keys at the cost of dedup.
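
The derivation chain for a convergent upload — content to fingerprint to key and nonce — can be sketched with the stdlib. AES-256-GCM itself is not available there, so only the derivation and the deduplication property are shown; the salt and labels are illustrative, not Pollis's actual values.

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """Minimal RFC 5869 HKDF-SHA256 (extract then expand)."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, i = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([i]), hashlib.sha256).digest()
        okm += block
        i += 1
    return okm[:length]

def convergent_material(file_bytes: bytes):
    """Derive the fingerprint, key, and nonce for one upload.
    Everything flows deterministically from the file's content."""
    fingerprint = hashlib.sha256(file_bytes).digest()
    key = hkdf_sha256(fingerprint, b"example-salt", b"file key", 32)
    nonce = hkdf_sha256(fingerprint, b"example-salt", b"file nonce", 12)
    return fingerprint, key, nonce

photo = b"the same image, uploaded by two different people"
fp_a, key_a, _ = convergent_material(photo)
fp_b, key_b, _ = convergent_material(photo)
assert fp_a == fp_b and key_a == key_b   # identical content, identical object: dedup

fp_c, key_c, _ = convergent_material(photo + b"!")
assert fp_c != fp_a                      # any change yields a different object
```

Both properties described in the paragraph fall out of the same determinism: the server can deduplicate identical uploads, and anyone who already holds the plaintext can recompute the fingerprint and test for its presence.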

There is one significant exception to the end-to-end encryption story, and it is voice. When you join a voice channel, your microphone audio is encrypted between your device and a server called LiveKit, using the same DTLS-SRTP that every WebRTC application uses, including Slack Huddles, Microsoft Teams, and Google Meet. But LiveKit is what is called a selective forwarding unit, an SFU. It receives audio from each participant, decrypts it, and forwards it to the other participants, encrypting it again on the way out. That means the LiveKit server, in the moment, sees plaintext audio. It does not store the audio, and it has no business recording it, but a sufficiently determined operator could. The MLS protocol does in principle allow voice frames to be encrypted with group keys so that the SFU never sees plaintext, and LiveKit's library exposes the hooks needed to do this. Pollis does not currently turn that feature on. This is the largest and most honest gap between the messaging side of the app, where the server provably cannot eavesdrop, and the voice side, where the server can. It is worth noting that Discord, since September 2024, has shipped a protocol called DAVE that adds end-to-end encryption to its voice and video, so on this specific axis Discord is now ahead of Pollis. Anyone evaluating Pollis for use in a setting where voice eavesdropping is part of the threat model should know about this difference.

There are several other limitations worth being explicit about. The bearer tokens that the desktop app uses to talk to Turso, Cloudflare R2, LiveKit, and Resend are baked into the app binary itself, which is common for desktop apps that ship without a per-user authentication service in front of their backend providers. Anyone willing to take apart the app can extract those credentials and open a database connection equivalent to any client. They still cannot decrypt anything because of MLS, but they can read metadata at the level the server already sees. The verification of cross-signing certificates on inbound MLS commits currently logs a loud warning rather than refusing the commit, because refusing would freeze the local copy of the group at an old state while the rest of the world moved on; closing this gap requires a more complex catch-up protocol that is on the roadmap. There is no automatic periodic rotation of MLS group keys when nothing else is happening in a group, which means that the post-compromise healing properties depend on the group being actively used. There is no backup of message history of any kind: a brand-new device installed three months from now will not be able to read messages from before it joined, because the cryptography MLS uses, called TreeKEM, is built around forward secrecy that makes that genuinely impossible without a backup mechanism, and the product principle Pollis is built on says no backup mechanism. If you forget your PIN and have lost your Secret Key, your account is unrecoverable. These are not bugs. They are deliberate trade-offs that prefer privacy over convenience in the small number of cases where the two collide.

Compared to apps you have probably used, the picture is roughly this. Signal and WhatsApp encrypt your messages end to end the same way Pollis does, but they use the older Signal Protocol with X3DH and the Double Ratchet, designed in 2014; Pollis uses the newer MLS, which is better suited to large groups and to people with multiple devices. Slack and Microsoft Teams do not encrypt your messages end to end at all; the server reads everything. Enterprise add-ons such as Slack Enterprise Key Management and Microsoft 365 Customer Key let large customers control the encryption keys used at rest, but those features are about key custody, not end-to-end encryption — the server still has access to plaintext for indexing, search, and compliance. Discord is in the Slack camp for messages — the server reads chat — but its 2024 DAVE protocol does provide end-to-end encryption for voice and video, which is more than Pollis currently does for voice. Element and Matrix encrypt their group chat, but with a different system called Megolm, which trades some of the post-compromise security MLS provides for the ability to back up message history through the server, which Pollis intentionally does not do. iMessage encrypts messages but pairwise rather than at the group level, and by default backs up to iCloud in a form Apple holds the keys to; users who turn on Apple's Advanced Data Protection get an end-to-end encrypted backup, but it is opt-in. The closest analog to Pollis in cryptographic shape is Wire, which also uses MLS through OpenMLS, in roughly the same configuration. The closest analog to Pollis in product shape is Slack, with the difference that the server cannot read your messages.

The summary, in plain words, is this. Pollis encrypts your messages on your computer, with keys that exist only on your computers, before sending them anywhere, using a modern public standard called MLS that is broadly considered the right way to do this in 2026. Your local data is encrypted on disk under a key that itself is encrypted under your PIN. Your account is recoverable to a new device only by an existing device of yours, or by a one-hundred-and-fifty-bit Secret Key that only you have, and is unrecoverable otherwise. Files are encrypted before they leave your computer. Voice is not, and shares its trust model with Slack Huddles. The full technical writeup, with citations to the specific RFCs and parameter choices, lives alongside this document for anyone who wants to verify the claims rather than take them on faith.

Security Whitepaper

The full technical whitepaper is the most detailed document Pollis publishes about its security model. It is intended for security engineers, cryptographers, and independent reviewers who want to verify the claims on the other two tabs against the implementation, rather than read a narrative.

It documents the threat model, identity and key material, transport and at-rest encryption, MLS group encryption, DM and channel semantics, auth and session, server-trust assumptions, known gaps and deferred mitigations, and a worked example tracing a full new-user signup through their first message send. It is around five hundred lines of prose plus citations, and is kept in the repository so reviewers can read it at the same revision as the code.

The document lives on GitHub, alongside the source it describes:

Read the whitepaper on GitHub →

Pinned to main. The repo's revision history captures every change.