Can I really trust that the file is not uploaded somewhere?
Yes, and you can verify it in two clicks. Open your browser's developer tools (F12 in most cases), go to the Network panel and start recording. Drop a file in the drop zone and run the hash. You will see zero HTTP requests leaving the browser during the computation. File reading and hashing both happen inside the tab, with no network involvement.
Why are MD5 and SHA-1 marked as legacy algorithms?
For MD5, the first full collision was published in 2004 by Wang, Feng, Lai and Yu, and in 2008 Stevens and others built two X.509 certificates with identical MD5 hashes. Producing an MD5 collision has been within reach of a laptop ever since. For SHA-1, the first practical collision was demonstrated by Google in 2017 with the SHAttered project. In 2020 Leurent and Peyrin published "SHA-1 is a Shambles", a chosen-prefix collision (the most dangerous variety) achievable for around 45,000 USD of rented GPU time: at that point, every attack that was practical against MD5 became practical against SHA-1 too. The consequence is that neither algorithm provides the guarantee they were originally adopted for, and the uses that depend on that guarantee (signatures, authenticated downloads against an adversary, certificates) must move to SHA-256 or stronger.
So they are officially deprecated?
Yes, by different standards bodies and on different timelines. The IETF published RFC 6151 in 2011, stating that MD5 is no longer acceptable where collision resistance is required and that new protocol designs should not employ it. NIST officially announced SHA-1 retirement on December 15, 2022, with a transition deadline of December 31, 2030 for FIPS-validated cryptographic modules. The October 2024 public draft of NIST SP 800-131A Rev. 3 formalises the retirement schedule, also covering 224-bit hashes and other dated primitives. In practice, after 2030 FIPS-validated modules will no longer be allowed to use SHA-1 for any purpose that requires collision resistance.
Then why keep them in the tool?
Because legitimate uses still exist: verifying a download against accidental corruption (no adversary), comparing a file's hash against a historical manifest, reading checksums published years ago, working with legacy systems that use them internally for non security-critical identifiers. They stay in the table, at the bottom and with an explicit badge, so that anyone who needs them has the tool, and anyone unfamiliar with their status sees immediately that they belong to a separate category.
Why are BLAKE3, Whirlpool and Tiger not included?
A correct BLAKE3 implementation requires a parallel Merkle-tree structure: the mature implementations are compiled code, and to stay coherent with the rest of this tool (everything written in browser-inspectable JavaScript) we prefer not to include it until a pure-JS version is available at acceptable cost. Whirlpool and Tiger have become niche, used almost exclusively in specific legacy contexts (Direct Connect, Gnutella, some academic projects): if you need them, dedicated command-line tools already cover those cases.
What about password hashing? Can I use this tool to hash passwords?
No, none of the algorithms here is suitable. Passwords need functions designed to be deliberately slow and memory-hard: argon2id (2026 recommendation), bcrypt (legacy but still valid), scrypt. They live on the server side, applied to a structured input (with a per-user unique salt). A web hash tool is not the right place: in production these belong inside the authentication system, not computed by hand.
My file is 500 MB, what should I do?
Use the command line: it streams the file in chunks without holding it all in memory, which makes it more efficient for large files. On Linux or macOS: sha256sum file.iso or shasum -a 256 file.iso. On Windows in cmd.exe: certutil -hashfile file.iso SHA256. In PowerShell: Get-FileHash file.iso -Algorithm SHA256. For BLAKE2 on Linux: b2sum file.iso.
Why does the same file produce different hashes between this tool and the command line?
For binary files (images, archives, ISOs, executables) the hashes are bit-for-bit identical. When you see differences on textual input, the culprit is usually one of: different line endings (CR, LF, or CRLF), an extra newline appended by the shell (echo "text" appends a \n before the text ever reaches sha256sum; use printf '%s' to avoid it), or a UTF-8 BOM present in one file and not the other. When hashing text, make sure you are comparing the exact same byte stream.
What does the size column (16 B, 32 B, 64 B) mean?
It is the raw output length in bytes, before hex encoding. MD5 produces 16 bytes (32 hex characters), SHA-256 produces 32 bytes (64 hex characters), SHA-512 produces 64 bytes (128 hex characters). Hex encoding doubles the textual length to keep the output readable and copyable as a string.
Does it work offline?
Yes. Once the page is loaded you can disconnect from the network: hashing keeps working. All eighteen algorithms run inside the browser; no external API is involved.
Are the hashes I get here compatible with sha256sum or OpenSSL?
Yes, byte for byte. The implementations pass the official RFC and NIST test vectors: SHA-256 produced here matches sha256sum, OpenSSL EVP_sha256, Python hashlib.sha256. SHA-3 matches openssl dgst -sha3-256. BLAKE2b matches b2sum. If you see a mismatch, it is almost always an encoding or line-ending issue with the input.