path: root/core
author    gingerBill <gingerBill@users.noreply.github.com>    2024-02-06 17:40:45 +0000
committer GitHub <noreply@github.com>                         2024-02-06 17:40:45 +0000
commit    79173ef119aff03dd4beccf582efe08303ada18a (patch)
tree      bd4c251900d5d7908d8f940623245a3f5f8195df /core
parent    1f0b24b7359ed1c43228d0d1a18538162a7c0b85 (diff)
parent    44758f2a6035803e504a06ec1d6b47f6336bb8cb (diff)
Merge pull request #3136 from Yawning/feature/crypto-hash
core:crypto/hash - Add and refactor
Diffstat (limited to 'core')
-rw-r--r--  core/crypto/README.md                   78
-rw-r--r--  core/crypto/_blake2/blake2.odin         97
-rw-r--r--  core/crypto/_sha3/sha3.odin             111
-rw-r--r--  core/crypto/blake2b/blake2b.odin        132
-rw-r--r--  core/crypto/blake2s/blake2s.odin        132
-rw-r--r--  core/crypto/hash/doc.odin               62
-rw-r--r--  core/crypto/hash/hash.odin              116
-rw-r--r--  core/crypto/hash/low_level.odin         353
-rw-r--r--  core/crypto/hmac/hmac.odin              162
-rw-r--r--  core/crypto/legacy/keccak/keccak.odin   390
-rw-r--r--  core/crypto/legacy/md5/md5.odin         148
-rw-r--r--  core/crypto/legacy/sha1/sha1.odin       151
-rw-r--r--  core/crypto/poly1305/poly1305.odin      4
-rw-r--r--  core/crypto/sha2/sha2.odin              513
-rw-r--r--  core/crypto/sha3/sha3.odin              380
-rw-r--r--  core/crypto/shake/shake.odin            218
-rw-r--r--  core/crypto/sm3/sm3.odin                145
17 files changed, 1278 insertions, 1914 deletions
diff --git a/core/crypto/README.md b/core/crypto/README.md
index adb815df4..1e4e41fb8 100644
--- a/core/crypto/README.md
+++ b/core/crypto/README.md
@@ -1,84 +1,22 @@
# crypto
-A cryptography library for the Odin language
+A cryptography library for the Odin language.
## Supported
-This library offers various algorithms implemented in Odin.
-Please see the chart below for some of the options.
-
-## Hashing algorithms
-
-| Algorithm | |
-|:-------------------------------------------------------------------------------------------------------------|:-----------------|
-| [BLAKE2B](https://datatracker.ietf.org/doc/html/rfc7693) | &#10004;&#65039; |
-| [BLAKE2S](https://datatracker.ietf.org/doc/html/rfc7693) | &#10004;&#65039; |
-| [SHA-2](https://csrc.nist.gov/csrc/media/publications/fips/180/2/archive/2002-08-01/documents/fips180-2.pdf) | &#10004;&#65039; |
-| [SHA-3](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf) | &#10004;&#65039; |
-| [SHAKE](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf) | &#10004;&#65039; |
-| [SM3](https://datatracker.ietf.org/doc/html/draft-sca-cfrg-sm3-02) | &#10004;&#65039; |
-| legacy/[Keccak](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf) | &#10004;&#65039; |
-| legacy/[MD5](https://datatracker.ietf.org/doc/html/rfc1321) | &#10004;&#65039; |
-| legacy/[SHA-1](https://datatracker.ietf.org/doc/html/rfc3174) | &#10004;&#65039; |
-
-#### High level API
-
-Each hash algorithm contains a procedure group named `hash`, or if the algorithm provides more than one digest size `hash_<size>`\*.
-Included in these groups are six procedures.
-- `hash_string` - Hash a given string and return the computed hash. Just calls `hash_bytes` internally
-- `hash_bytes` - Hash a given byte slice and return the computed hash
-- `hash_string_to_buffer` - Hash a given string and put the computed hash in the second proc parameter. Just calls `hash_bytes_to_buffer` internally
-- `hash_bytes_to_buffer` - Hash a given string and put the computed hash in the second proc parameter. The destination buffer has to be at least as big as the digest size of the hash
-- `hash_stream` - Takes a stream from io.Stream and returns the computed hash from it
-- `hash_file` - Takes a file handle and returns the computed hash from it. A second optional boolean parameter controls if the file is streamed (this is the default) or read at once (set to true)
-
-\* On some algorithms there is another part to the name, since they might offer control about additional parameters.
-For instance, `SHA-2` offers different sizes.
-Computing a 512-bit hash is therefore achieved by calling `sha2.hash_512(...)`.
-
-#### Low level API
-
-The above mentioned procedures internally call three procedures: `init`, `update` and `final`.
-You may also directly call them, if you wish.
-
-#### Example
-
-```odin
-package crypto_example
-
-// Import the desired package
-import "core:crypto/blake2b"
-
-main :: proc() {
- input := "foo"
-
- // Compute the hash, using the high level API
- computed_hash := blake2b.hash(input)
-
- // Variant that takes a destination buffer, instead of returning the computed hash
- hash := make([]byte, sha2.DIGEST_SIZE) // @note: Destination buffer has to be at least as big as the digest size of the hash
- blake2b.hash(input, hash[:])
-
- // Compute the hash, using the low level API
- ctx: blake2b.Context
- computed_hash_low: [blake2b.DIGEST_SIZE]byte
- blake2b.init(&ctx)
- blake2b.update(&ctx, transmute([]byte)input)
- blake2b.final(&ctx, computed_hash_low[:])
-}
-```
-For example uses of all available algorithms, please see the tests within `tests/core/crypto`.
+This package offers various algorithms implemented in Odin, along with
+useful helpers such as access to the system entropy source, and a
+constant-time byte comparison.
## Implementation considerations
- The crypto packages are not thread-safe.
- Best-effort is made to mitigate timing side-channels on reasonable
- architectures. Architectures that are known to be unreasonable include
+ architectures. Architectures that are known to be unreasonable include
but are not limited to i386, i486, and WebAssembly.
-- Some but not all of the packages attempt to santize sensitive data,
- however this is not done consistently through the library at the moment.
- As Thomas Pornin puts it "In general, such memory cleansing is a fool's
- quest."
+- The packages attempt to sanitize sensitive data, however this is, and
+  will remain, a "best-effort" implementation decision. As Thomas Pornin
+  puts it, "In general, such memory cleansing is a fool's quest."
- All of these packages have not received independent third party review.
## License
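Before the per-file changes, here is a minimal sketch of the new generic API that the reworked README refers to, using only names introduced later in this diff (`core:crypto/hash`, `hash.hash`, `hash.Algorithm`); the input string is illustrative.

```odin
package readme_example

import "core:crypto/hash"

main :: proc() {
	// One-shot digest with the new generic API; the result is allocated.
	digest := hash.hash(hash.Algorithm.SHA256, "hello world")
	defer delete(digest)
}
```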
diff --git a/core/crypto/_blake2/blake2.odin b/core/crypto/_blake2/blake2.odin
index 13b58dba9..2ad74843b 100644
--- a/core/crypto/_blake2/blake2.odin
+++ b/core/crypto/_blake2/blake2.odin
@@ -11,6 +11,7 @@ package _blake2
*/
import "core:encoding/endian"
+import "core:mem"
BLAKE2S_BLOCK_SIZE :: 64
BLAKE2S_SIZE :: 32
@@ -28,7 +29,6 @@ Blake2s_Context :: struct {
is_keyed: bool,
size: byte,
is_last_node: bool,
- cfg: Blake2_Config,
is_initialized: bool,
}
@@ -44,7 +44,6 @@ Blake2b_Context :: struct {
is_keyed: bool,
size: byte,
is_last_node: bool,
- cfg: Blake2_Config,
is_initialized: bool,
}
@@ -83,62 +82,61 @@ BLAKE2B_IV := [8]u64 {
0x1f83d9abfb41bd6b, 0x5be0cd19137e2179,
}
-init :: proc(ctx: ^$T) {
+init :: proc(ctx: ^$T, cfg: ^Blake2_Config) {
when T == Blake2s_Context {
- block_size :: BLAKE2S_BLOCK_SIZE
max_size :: BLAKE2S_SIZE
} else when T == Blake2b_Context {
- block_size :: BLAKE2B_BLOCK_SIZE
max_size :: BLAKE2B_SIZE
}
- if ctx.cfg.size > max_size {
+ if cfg.size > max_size {
panic("blake2: requested output size exceeeds algorithm max")
}
- p := make([]byte, block_size)
- defer delete(p)
+ // To save having to allocate a scratch buffer, use the internal
+ // data buffer (`ctx.x`), as it is exactly the correct size.
+ p := ctx.x[:]
- p[0] = ctx.cfg.size
- p[1] = byte(len(ctx.cfg.key))
+ p[0] = cfg.size
+ p[1] = byte(len(cfg.key))
- if ctx.cfg.salt != nil {
+ if cfg.salt != nil {
when T == Blake2s_Context {
- copy(p[16:], ctx.cfg.salt)
+ copy(p[16:], cfg.salt)
} else when T == Blake2b_Context {
- copy(p[32:], ctx.cfg.salt)
+ copy(p[32:], cfg.salt)
}
}
- if ctx.cfg.person != nil {
+ if cfg.person != nil {
when T == Blake2s_Context {
- copy(p[24:], ctx.cfg.person)
+ copy(p[24:], cfg.person)
} else when T == Blake2b_Context {
- copy(p[48:], ctx.cfg.person)
+ copy(p[48:], cfg.person)
}
}
- if ctx.cfg.tree != nil {
- p[2] = ctx.cfg.tree.(Blake2_Tree).fanout
- p[3] = ctx.cfg.tree.(Blake2_Tree).max_depth
- endian.unchecked_put_u32le(p[4:], ctx.cfg.tree.(Blake2_Tree).leaf_size)
+ if cfg.tree != nil {
+ p[2] = cfg.tree.(Blake2_Tree).fanout
+ p[3] = cfg.tree.(Blake2_Tree).max_depth
+ endian.unchecked_put_u32le(p[4:], cfg.tree.(Blake2_Tree).leaf_size)
when T == Blake2s_Context {
- p[8] = byte(ctx.cfg.tree.(Blake2_Tree).node_offset)
- p[9] = byte(ctx.cfg.tree.(Blake2_Tree).node_offset >> 8)
- p[10] = byte(ctx.cfg.tree.(Blake2_Tree).node_offset >> 16)
- p[11] = byte(ctx.cfg.tree.(Blake2_Tree).node_offset >> 24)
- p[12] = byte(ctx.cfg.tree.(Blake2_Tree).node_offset >> 32)
- p[13] = byte(ctx.cfg.tree.(Blake2_Tree).node_offset >> 40)
- p[14] = ctx.cfg.tree.(Blake2_Tree).node_depth
- p[15] = ctx.cfg.tree.(Blake2_Tree).inner_hash_size
+ p[8] = byte(cfg.tree.(Blake2_Tree).node_offset)
+ p[9] = byte(cfg.tree.(Blake2_Tree).node_offset >> 8)
+ p[10] = byte(cfg.tree.(Blake2_Tree).node_offset >> 16)
+ p[11] = byte(cfg.tree.(Blake2_Tree).node_offset >> 24)
+ p[12] = byte(cfg.tree.(Blake2_Tree).node_offset >> 32)
+ p[13] = byte(cfg.tree.(Blake2_Tree).node_offset >> 40)
+ p[14] = cfg.tree.(Blake2_Tree).node_depth
+ p[15] = cfg.tree.(Blake2_Tree).inner_hash_size
} else when T == Blake2b_Context {
- endian.unchecked_put_u64le(p[8:], ctx.cfg.tree.(Blake2_Tree).node_offset)
- p[16] = ctx.cfg.tree.(Blake2_Tree).node_depth
- p[17] = ctx.cfg.tree.(Blake2_Tree).inner_hash_size
+ endian.unchecked_put_u64le(p[8:], cfg.tree.(Blake2_Tree).node_offset)
+ p[16] = cfg.tree.(Blake2_Tree).node_depth
+ p[17] = cfg.tree.(Blake2_Tree).inner_hash_size
}
} else {
p[2], p[3] = 1, 1
}
- ctx.size = ctx.cfg.size
+ ctx.size = cfg.size
for i := 0; i < 8; i += 1 {
when T == Blake2s_Context {
ctx.h[i] = BLAKE2S_IV[i] ~ endian.unchecked_get_u32le(p[i * 4:])
@@ -147,11 +145,14 @@ init :: proc(ctx: ^$T) {
ctx.h[i] = BLAKE2B_IV[i] ~ endian.unchecked_get_u64le(p[i * 8:])
}
}
- if ctx.cfg.tree != nil && ctx.cfg.tree.(Blake2_Tree).is_last_node {
+
+ mem.zero(&ctx.x, size_of(ctx.x)) // Done with the scratch space, no barrier.
+
+ if cfg.tree != nil && cfg.tree.(Blake2_Tree).is_last_node {
ctx.is_last_node = true
}
- if len(ctx.cfg.key) > 0 {
- copy(ctx.padded_key[:], ctx.cfg.key)
+ if len(cfg.key) > 0 {
+ copy(ctx.padded_key[:], cfg.key)
update(ctx, ctx.padded_key[:])
ctx.is_keyed = true
}
@@ -194,22 +195,40 @@ update :: proc(ctx: ^$T, p: []byte) {
ctx.nx += copy(ctx.x[ctx.nx:], p)
}
-final :: proc(ctx: ^$T, hash: []byte) {
+final :: proc(ctx: ^$T, hash: []byte, finalize_clone: bool = false) {
assert(ctx.is_initialized)
+ ctx := ctx
+ if finalize_clone {
+ tmp_ctx: T
+ clone(&tmp_ctx, ctx)
+ ctx = &tmp_ctx
+ }
+ defer(reset(ctx))
+
when T == Blake2s_Context {
- if len(hash) < int(ctx.cfg.size) {
+ if len(hash) < int(ctx.size) {
panic("crypto/blake2s: invalid destination digest size")
}
blake2s_final(ctx, hash)
} else when T == Blake2b_Context {
- if len(hash) < int(ctx.cfg.size) {
+ if len(hash) < int(ctx.size) {
panic("crypto/blake2b: invalid destination digest size")
}
blake2b_final(ctx, hash)
}
+}
+
+clone :: proc(ctx, other: ^$T) {
+ ctx^ = other^
+}
+
+reset :: proc(ctx: ^$T) {
+ if !ctx.is_initialized {
+ return
+ }
- ctx.is_initialized = false
+ mem.zero_explicit(ctx, size_of(ctx^))
}
@(private)
diff --git a/core/crypto/_sha3/sha3.odin b/core/crypto/_sha3/sha3.odin
index 43af0ad75..6779c9770 100644
--- a/core/crypto/_sha3/sha3.odin
+++ b/core/crypto/_sha3/sha3.odin
@@ -12,10 +12,16 @@ package _sha3
*/
import "core:math/bits"
+import "core:mem"
ROUNDS :: 24
-Sha3_Context :: struct {
+RATE_224 :: 1152 / 8
+RATE_256 :: 1088 / 8
+RATE_384 :: 832 / 8
+RATE_512 :: 576 / 8
+
+Context :: struct {
st: struct #raw_union {
b: [200]u8,
q: [25]u64,
@@ -103,81 +109,100 @@ keccakf :: proc "contextless" (st: ^[25]u64) {
}
}
-init :: proc(c: ^Sha3_Context) {
+init :: proc(ctx: ^Context) {
for i := 0; i < 25; i += 1 {
- c.st.q[i] = 0
+ ctx.st.q[i] = 0
}
- c.rsiz = 200 - 2 * c.mdlen
- c.pt = 0
+ ctx.rsiz = 200 - 2 * ctx.mdlen
+ ctx.pt = 0
- c.is_initialized = true
- c.is_finalized = false
+ ctx.is_initialized = true
+ ctx.is_finalized = false
}
-update :: proc(c: ^Sha3_Context, data: []byte) {
- assert(c.is_initialized)
- assert(!c.is_finalized)
+update :: proc(ctx: ^Context, data: []byte) {
+ assert(ctx.is_initialized)
+ assert(!ctx.is_finalized)
- j := c.pt
+ j := ctx.pt
for i := 0; i < len(data); i += 1 {
- c.st.b[j] ~= data[i]
+ ctx.st.b[j] ~= data[i]
j += 1
- if j >= c.rsiz {
- keccakf(&c.st.q)
+ if j >= ctx.rsiz {
+ keccakf(&ctx.st.q)
j = 0
}
}
- c.pt = j
+ ctx.pt = j
}
-final :: proc(c: ^Sha3_Context, hash: []byte) {
- assert(c.is_initialized)
+final :: proc(ctx: ^Context, hash: []byte, finalize_clone: bool = false) {
+ assert(ctx.is_initialized)
- if len(hash) < c.mdlen {
- if c.is_keccak {
+ if len(hash) < ctx.mdlen {
+ if ctx.is_keccak {
panic("crypto/keccac: invalid destination digest size")
}
panic("crypto/sha3: invalid destination digest size")
}
- if c.is_keccak {
- c.st.b[c.pt] ~= 0x01
+
+ ctx := ctx
+ if finalize_clone {
+ tmp_ctx: Context
+ clone(&tmp_ctx, ctx)
+ ctx = &tmp_ctx
+ }
+ defer(reset(ctx))
+
+ if ctx.is_keccak {
+ ctx.st.b[ctx.pt] ~= 0x01
} else {
- c.st.b[c.pt] ~= 0x06
+ ctx.st.b[ctx.pt] ~= 0x06
}
- c.st.b[c.rsiz - 1] ~= 0x80
- keccakf(&c.st.q)
- for i := 0; i < c.mdlen; i += 1 {
- hash[i] = c.st.b[i]
+ ctx.st.b[ctx.rsiz - 1] ~= 0x80
+ keccakf(&ctx.st.q)
+ for i := 0; i < ctx.mdlen; i += 1 {
+ hash[i] = ctx.st.b[i]
+ }
+}
+
+clone :: proc(ctx, other: ^Context) {
+ ctx^ = other^
+}
+
+reset :: proc(ctx: ^Context) {
+ if !ctx.is_initialized {
+ return
}
- c.is_initialized = false // No more absorb, no more squeeze.
+ mem.zero_explicit(ctx, size_of(ctx^))
}
-shake_xof :: proc(c: ^Sha3_Context) {
- assert(c.is_initialized)
- assert(!c.is_finalized)
+shake_xof :: proc(ctx: ^Context) {
+ assert(ctx.is_initialized)
+ assert(!ctx.is_finalized)
- c.st.b[c.pt] ~= 0x1F
- c.st.b[c.rsiz - 1] ~= 0x80
- keccakf(&c.st.q)
- c.pt = 0
+ ctx.st.b[ctx.pt] ~= 0x1F
+ ctx.st.b[ctx.rsiz - 1] ~= 0x80
+ keccakf(&ctx.st.q)
+ ctx.pt = 0
- c.is_finalized = true // No more absorb, unlimited squeeze.
+ ctx.is_finalized = true // No more absorb, unlimited squeeze.
}
-shake_out :: proc(c: ^Sha3_Context, hash: []byte) {
- assert(c.is_initialized)
- assert(c.is_finalized)
+shake_out :: proc(ctx: ^Context, hash: []byte) {
+ assert(ctx.is_initialized)
+ assert(ctx.is_finalized)
- j := c.pt
+ j := ctx.pt
for i := 0; i < len(hash); i += 1 {
- if j >= c.rsiz {
- keccakf(&c.st.q)
+ if j >= ctx.rsiz {
+ keccakf(&ctx.st.q)
j = 0
}
- hash[i] = c.st.b[j]
+ hash[i] = ctx.st.b[j]
j += 1
}
- c.pt = j
+ ctx.pt = j
}
diff --git a/core/crypto/blake2b/blake2b.odin b/core/crypto/blake2b/blake2b.odin
index 17657311e..384c2ffea 100644
--- a/core/crypto/blake2b/blake2b.odin
+++ b/core/crypto/blake2b/blake2b.odin
@@ -1,3 +1,10 @@
+/*
+package blake2b implements the BLAKE2b hash algorithm.
+
+See:
+- https://datatracker.ietf.org/doc/html/rfc7693
+- https://www.blake2.net
+*/
package blake2b
/*
@@ -6,122 +13,47 @@ package blake2b
List of contributors:
zhibog, dotbmp: Initial implementation.
-
- Interface for the BLAKE2b hashing algorithm.
- BLAKE2b and BLAKE2s share the implementation in the _blake2 package.
*/
-import "core:io"
-import "core:os"
-
import "../_blake2"
-/*
- High level API
-*/
-
+// DIGEST_SIZE is the BLAKE2b digest size in bytes.
DIGEST_SIZE :: 64
-// hash_string will hash the given input and return the
-// computed hash
-hash_string :: proc(data: string) -> [DIGEST_SIZE]byte {
- return hash_bytes(transmute([]byte)(data))
-}
+// BLOCK_SIZE is the BLAKE2b block size in bytes.
+BLOCK_SIZE :: _blake2.BLAKE2B_BLOCK_SIZE
-// hash_bytes will hash the given input and return the
-// computed hash
-hash_bytes :: proc(data: []byte) -> [DIGEST_SIZE]byte {
- hash: [DIGEST_SIZE]byte
- ctx: Context
- cfg: _blake2.Blake2_Config
- cfg.size = _blake2.BLAKE2B_SIZE
- ctx.cfg = cfg
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer :: proc(data, hash: []byte) {
- ctx: Context
- cfg: _blake2.Blake2_Config
- cfg.size = _blake2.BLAKE2B_SIZE
- ctx.cfg = cfg
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
+// Context is a BLAKE2b instance.
+Context :: _blake2.Blake2b_Context
-// hash_stream will read the stream in chunks and compute a
-// hash from its contents
-hash_stream :: proc(s: io.Stream) -> ([DIGEST_SIZE]byte, bool) {
- hash: [DIGEST_SIZE]byte
- ctx: Context
+// init initializes a Context with the default BLAKE2b config.
+init :: proc(ctx: ^Context) {
cfg: _blake2.Blake2_Config
cfg.size = _blake2.BLAKE2B_SIZE
- ctx.cfg = cfg
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
+ _blake2.init(ctx, &cfg)
}
-// hash_file will read the file provided by the given handle
-// and compute a hash
-hash_file :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE]byte, bool) {
- if !load_at_once {
- return hash_stream(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes(buf[:]), ok
- }
- }
- return [DIGEST_SIZE]byte{}, false
+// update adds more data to the Context.
+update :: proc(ctx: ^Context, data: []byte) {
+ _blake2.update(ctx, data)
}
-hash :: proc {
- hash_stream,
- hash_file,
- hash_bytes,
- hash_string,
- hash_bytes_to_buffer,
- hash_string_to_buffer,
+// final finalizes the Context, writes the digest to hash, and calls
+// reset on the Context.
+//
+// Iff finalize_clone is set, final will work on a copy of the Context,
+// which is useful for calculating rolling digests.
+final :: proc(ctx: ^Context, hash: []byte, finalize_clone: bool = false) {
+ _blake2.final(ctx, hash, finalize_clone)
}
-/*
- Low level API
-*/
-
-Context :: _blake2.Blake2b_Context
-
-init :: proc(ctx: ^Context) {
- _blake2.init(ctx)
-}
-
-update :: proc(ctx: ^Context, data: []byte) {
- _blake2.update(ctx, data)
+// clone clones the Context other into ctx.
+clone :: proc(ctx, other: ^Context) {
+ _blake2.clone(ctx, other)
}
-final :: proc(ctx: ^Context, hash: []byte) {
- _blake2.final(ctx, hash)
+// reset sanitizes the Context. The Context must be re-initialized to
+// be used again.
+reset :: proc(ctx: ^Context) {
+ _blake2.reset(ctx)
}
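A short usage sketch of the reworked `blake2b` package above, assuming the `init`/`update`/`final` signatures from this diff; the `finalize_clone` argument demonstrates the rolling-digest behaviour described in the comments, and the input strings are illustrative.

```odin
package blake2b_example

import "core:crypto/blake2b"

main :: proc() {
	msg_a := "part one"
	msg_b := " part two"

	ctx: blake2b.Context
	blake2b.init(&ctx)
	blake2b.update(&ctx, transmute([]byte)(msg_a))

	// Finalize a copy of the state so the running context stays usable.
	rolling: [blake2b.DIGEST_SIZE]byte
	blake2b.final(&ctx, rolling[:], true)

	// Keep absorbing, then finalize (this also resets the context).
	blake2b.update(&ctx, transmute([]byte)(msg_b))
	digest: [blake2b.DIGEST_SIZE]byte
	blake2b.final(&ctx, digest[:])
}
```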
diff --git a/core/crypto/blake2s/blake2s.odin b/core/crypto/blake2s/blake2s.odin
index 2da619bb8..1ba9bef2d 100644
--- a/core/crypto/blake2s/blake2s.odin
+++ b/core/crypto/blake2s/blake2s.odin
@@ -1,3 +1,10 @@
+/*
+package blake2s implements the BLAKE2s hash algorithm.
+
+See:
+- https://datatracker.ietf.org/doc/html/rfc7693
+- https://www.blake2.net/
+*/
package blake2s
/*
@@ -6,122 +13,47 @@ package blake2s
List of contributors:
zhibog, dotbmp: Initial implementation.
-
- Interface for the BLAKE2s hashing algorithm.
- BLAKE2s and BLAKE2b share the implementation in the _blake2 package.
*/
-import "core:io"
-import "core:os"
-
import "../_blake2"
-/*
- High level API
-*/
-
+// DIGEST_SIZE is the BLAKE2s digest size in bytes.
DIGEST_SIZE :: 32
-// hash_string will hash the given input and return the
-// computed hash
-hash_string :: proc(data: string) -> [DIGEST_SIZE]byte {
- return hash_bytes(transmute([]byte)(data))
-}
+// BLOCK_SIZE is the BLAKE2s block size in bytes.
+BLOCK_SIZE :: _blake2.BLAKE2S_BLOCK_SIZE
-// hash_bytes will hash the given input and return the
-// computed hash
-hash_bytes :: proc(data: []byte) -> [DIGEST_SIZE]byte {
- hash: [DIGEST_SIZE]byte
- ctx: Context
- cfg: _blake2.Blake2_Config
- cfg.size = _blake2.BLAKE2S_SIZE
- ctx.cfg = cfg
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer :: proc(data, hash: []byte) {
- ctx: Context
- cfg: _blake2.Blake2_Config
- cfg.size = _blake2.BLAKE2S_SIZE
- ctx.cfg = cfg
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
+// Context is a BLAKE2s instance.
+Context :: _blake2.Blake2s_Context
-// hash_stream will read the stream in chunks and compute a
-// hash from its contents
-hash_stream :: proc(s: io.Stream) -> ([DIGEST_SIZE]byte, bool) {
- hash: [DIGEST_SIZE]byte
- ctx: Context
+// init initializes a Context with the default BLAKE2s config.
+init :: proc(ctx: ^Context) {
cfg: _blake2.Blake2_Config
cfg.size = _blake2.BLAKE2S_SIZE
- ctx.cfg = cfg
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
+ _blake2.init(ctx, &cfg)
}
-// hash_file will read the file provided by the given handle
-// and compute a hash
-hash_file :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE]byte, bool) {
- if !load_at_once {
- return hash_stream(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes(buf[:]), ok
- }
- }
- return [DIGEST_SIZE]byte{}, false
+// update adds more data to the Context.
+update :: proc(ctx: ^Context, data: []byte) {
+ _blake2.update(ctx, data)
}
-hash :: proc {
- hash_stream,
- hash_file,
- hash_bytes,
- hash_string,
- hash_bytes_to_buffer,
- hash_string_to_buffer,
+// final finalizes the Context, writes the digest to hash, and calls
+// reset on the Context.
+//
+// Iff finalize_clone is set, final will work on a copy of the Context,
+// which is useful for calculating rolling digests.
+final :: proc(ctx: ^Context, hash: []byte, finalize_clone: bool = false) {
+ _blake2.final(ctx, hash, finalize_clone)
}
-/*
- Low level API
-*/
-
-Context :: _blake2.Blake2s_Context
-
-init :: proc(ctx: ^Context) {
- _blake2.init(ctx)
-}
-
-update :: proc(ctx: ^Context, data: []byte) {
- _blake2.update(ctx, data)
+// clone clones the Context other into ctx.
+clone :: proc(ctx, other: ^Context) {
+ _blake2.clone(ctx, other)
}
-final :: proc(ctx: ^Context, hash: []byte) {
- _blake2.final(ctx, hash)
+// reset sanitizes the Context. The Context must be re-initialized to
+// be used again.
+reset :: proc(ctx: ^Context) {
+ _blake2.reset(ctx)
}
diff --git a/core/crypto/hash/doc.odin b/core/crypto/hash/doc.odin
new file mode 100644
index 000000000..d50908b94
--- /dev/null
+++ b/core/crypto/hash/doc.odin
@@ -0,0 +1,62 @@
+/*
+package hash provides a generic interface to the supported hash algorithms.
+
+A high-level convenience procedure group `hash` is provided to easily
+accomplish common tasks.
+- `hash_string` - Hash a given string and return the digest.
+- `hash_bytes` - Hash a given byte slice and return the digest.
+- `hash_string_to_buffer` - Hash a given string and put the digest in
+ the third parameter. It requires that the destination buffer
+ is at least as big as the digest size.
+- `hash_bytes_to_buffer` - Hash a given byte slice and put the computed
+ digest in the third parameter. It requires that the destination
+ buffer is at least as big as the digest size.
+- `hash_stream` - Incrementally fully consume a `io.Stream`, and return
+ the computed digest.
+- `hash_file` - Takes a file handle and returns the computed digest.
+ A third optional boolean parameter controls if the file is streamed
+ (default), or read at once.
+
+```odin
+package hash_example
+
+import "core:crypto/hash"
+
+main :: proc() {
+ input := "Feed the fire."
+
+ // Compute the digest, using the high level API.
+ returned_digest := hash.hash(hash.Algorithm.SHA512_256, input)
+ defer delete(returned_digest)
+
+ // Variant that takes a destination buffer, instead of returning
+ // the digest.
+ digest := make([]byte, hash.DIGEST_SIZES[hash.Algorithm.BLAKE2B]) // @note: Destination buffer has to be at least as big as the digest size of the hash.
+ defer delete(digest)
+ hash.hash(hash.Algorithm.BLAKE2B, input, digest)
+}
+```
+
+A generic low level API is provided supporting the init/update/final interface
+that is typical with cryptographic hash function implementations.
+
+```odin
+package hash_example
+
+import "core:crypto/hash"
+
+main :: proc() {
+ input := "Let the cinders burn."
+
+ // Compute the digest, using the low level API.
+ ctx: hash.Context
+ digest := make([]byte, hash.DIGEST_SIZES[hash.Algorithm.SHA3_512])
+ defer delete(digest)
+
+ hash.init(&ctx, hash.Algorithm.SHA3_512)
+ hash.update(&ctx, transmute([]byte)input)
+ hash.final(&ctx, digest)
+}
+```
+*/
+package crypto_hash
\ No newline at end of file
diff --git a/core/crypto/hash/hash.odin b/core/crypto/hash/hash.odin
new file mode 100644
index 000000000..e4b3d4be1
--- /dev/null
+++ b/core/crypto/hash/hash.odin
@@ -0,0 +1,116 @@
+package crypto_hash
+
+/*
+ Copyright 2021 zhibog
+ Made available under the BSD-3 license.
+
+ List of contributors:
+ zhibog, dotbmp: Initial implementation.
+*/
+
+import "core:io"
+import "core:mem"
+import "core:os"
+
+// hash_bytes will hash the given input and return the computed digest
+// in a newly allocated slice.
+hash_string :: proc(algorithm: Algorithm, data: string, allocator := context.allocator) -> []byte {
+ return hash_bytes(algorithm, transmute([]byte)(data), allocator)
+}
+
+// hash_bytes will hash the given input and return the computed digest
+// in a newly allocated slice.
+hash_bytes :: proc(algorithm: Algorithm, data: []byte, allocator := context.allocator) -> []byte {
+ dst := make([]byte, DIGEST_SIZES[algorithm], allocator)
+ hash_bytes_to_buffer(algorithm, data, dst)
+ return dst
+}
+
+// hash_string_to_buffer will hash the given input and assign the
+// computed digest to the third parameter. It requires that the
+// destination buffer is at least as big as the digest size.
+hash_string_to_buffer :: proc(algorithm: Algorithm, data: string, hash: []byte) {
+ hash_bytes_to_buffer(algorithm, transmute([]byte)(data), hash)
+}
+
+// hash_bytes_to_buffer will hash the given input and write the
+// computed digest into the third parameter. It requires that the
+// destination buffer is at least as big as the digest size.
+hash_bytes_to_buffer :: proc(algorithm: Algorithm, data, hash: []byte) {
+ ctx: Context
+
+ init(&ctx, algorithm)
+ update(&ctx, data)
+ final(&ctx, hash)
+}
+
+// hash_stream will incrementally fully consume a stream, and return the
+// computed digest in a newly allocated slice.
+hash_stream :: proc(
+ algorithm: Algorithm,
+ s: io.Stream,
+ allocator := context.allocator,
+) -> (
+ []byte,
+ io.Error,
+) {
+ ctx: Context
+
+ buf: [MAX_BLOCK_SIZE * 4]byte
+ defer mem.zero_explicit(&buf, size_of(buf))
+
+ init(&ctx, algorithm)
+
+ loop: for {
+ n, err := io.read(s, buf[:])
+ if n > 0 {
+ // XXX/yawning: Can io.read return n > 0 and EOF?
+ update(&ctx, buf[:n])
+ }
+ #partial switch err {
+ case .None:
+ case .EOF:
+ break loop
+ case:
+ return nil, err
+ }
+ }
+
+ dst := make([]byte, DIGEST_SIZES[algorithm], allocator)
+ final(&ctx, dst)
+
+ return dst, io.Error.None
+}
+
+// hash_file will read the file provided by the given handle and return the
+// computed digest in a newly allocated slice.
+hash_file :: proc(
+ algorithm: Algorithm,
+ hd: os.Handle,
+ load_at_once := false,
+ allocator := context.allocator,
+) -> (
+ []byte,
+ io.Error,
+) {
+ if !load_at_once {
+ return hash_stream(algorithm, os.stream_from_handle(hd), allocator)
+ }
+
+ buf, ok := os.read_entire_file(hd, allocator)
+ if !ok {
+ return nil, io.Error.Unknown
+ }
+ defer delete(buf, allocator)
+
+ return hash_bytes(algorithm, buf, allocator), io.Error.None
+}
+
+hash :: proc {
+ hash_stream,
+ hash_file,
+ hash_bytes,
+ hash_string,
+ hash_bytes_to_buffer,
+ hash_string_to_buffer,
+}
diff --git a/core/crypto/hash/low_level.odin b/core/crypto/hash/low_level.odin
new file mode 100644
index 000000000..242eadd5f
--- /dev/null
+++ b/core/crypto/hash/low_level.odin
@@ -0,0 +1,353 @@
+package crypto_hash
+
+import "core:crypto/blake2b"
+import "core:crypto/blake2s"
+import "core:crypto/sha2"
+import "core:crypto/sha3"
+import "core:crypto/sm3"
+import "core:crypto/legacy/keccak"
+import "core:crypto/legacy/md5"
+import "core:crypto/legacy/sha1"
+
+import "core:reflect"
+
+// MAX_DIGEST_SIZE is the maximum size digest that can be returned by any
+// of the Algorithms supported via this package.
+MAX_DIGEST_SIZE :: 64
+// MAX_BLOCK_SIZE is the maximum block size used by any of Algorithms
+// supported by this package.
+MAX_BLOCK_SIZE :: sha3.BLOCK_SIZE_224
+
+// Algorithm is the algorithm identifier associated with a given Context.
+Algorithm :: enum {
+ Invalid,
+ BLAKE2B,
+ BLAKE2S,
+ SHA224,
+ SHA256,
+ SHA384,
+ SHA512,
+ SHA512_256,
+ SHA3_224,
+ SHA3_256,
+ SHA3_384,
+ SHA3_512,
+ SM3,
+ Legacy_KECCAK_224,
+ Legacy_KECCAK_256,
+ Legacy_KECCAK_384,
+ Legacy_KECCAK_512,
+ Insecure_MD5,
+ Insecure_SHA1,
+}
+
+// ALGORITHM_NAMES is the Algorithm to algorithm name string.
+ALGORITHM_NAMES := [Algorithm]string {
+ .Invalid = "Invalid",
+ .BLAKE2B = "BLAKE2b",
+ .BLAKE2S = "BLAKE2s",
+ .SHA224 = "SHA-224",
+ .SHA256 = "SHA-256",
+ .SHA384 = "SHA-384",
+ .SHA512 = "SHA-512",
+ .SHA512_256 = "SHA-512/256",
+ .SHA3_224 = "SHA3-224",
+ .SHA3_256 = "SHA3-256",
+ .SHA3_384 = "SHA3-384",
+ .SHA3_512 = "SHA3-512",
+ .SM3 = "SM3",
+ .Legacy_KECCAK_224 = "Keccak-224",
+ .Legacy_KECCAK_256 = "Keccak-256",
+ .Legacy_KECCAK_384 = "Keccak-384",
+ .Legacy_KECCAK_512 = "Keccak-512",
+ .Insecure_MD5 = "MD5",
+ .Insecure_SHA1 = "SHA-1",
+}
+
+// DIGEST_SIZES is the Algorithm to digest size in bytes.
+DIGEST_SIZES := [Algorithm]int {
+ .Invalid = 0,
+ .BLAKE2B = blake2b.DIGEST_SIZE,
+ .BLAKE2S = blake2s.DIGEST_SIZE,
+ .SHA224 = sha2.DIGEST_SIZE_224,
+ .SHA256 = sha2.DIGEST_SIZE_256,
+ .SHA384 = sha2.DIGEST_SIZE_384,
+ .SHA512 = sha2.DIGEST_SIZE_512,
+ .SHA512_256 = sha2.DIGEST_SIZE_512_256,
+ .SHA3_224 = sha3.DIGEST_SIZE_224,
+ .SHA3_256 = sha3.DIGEST_SIZE_256,
+ .SHA3_384 = sha3.DIGEST_SIZE_384,
+ .SHA3_512 = sha3.DIGEST_SIZE_512,
+ .SM3 = sm3.DIGEST_SIZE,
+ .Legacy_KECCAK_224 = keccak.DIGEST_SIZE_224,
+ .Legacy_KECCAK_256 = keccak.DIGEST_SIZE_256,
+ .Legacy_KECCAK_384 = keccak.DIGEST_SIZE_384,
+ .Legacy_KECCAK_512 = keccak.DIGEST_SIZE_512,
+ .Insecure_MD5 = md5.DIGEST_SIZE,
+ .Insecure_SHA1 = sha1.DIGEST_SIZE,
+}
+
+// BLOCK_SIZES is the Algorithm to block size in bytes.
+BLOCK_SIZES := [Algorithm]int {
+ .Invalid = 0,
+ .BLAKE2B = blake2b.BLOCK_SIZE,
+ .BLAKE2S = blake2s.BLOCK_SIZE,
+ .SHA224 = sha2.BLOCK_SIZE_256,
+ .SHA256 = sha2.BLOCK_SIZE_256,
+ .SHA384 = sha2.BLOCK_SIZE_512,
+ .SHA512 = sha2.BLOCK_SIZE_512,
+ .SHA512_256 = sha2.BLOCK_SIZE_512,
+ .SHA3_224 = sha3.BLOCK_SIZE_224,
+ .SHA3_256 = sha3.BLOCK_SIZE_256,
+ .SHA3_384 = sha3.BLOCK_SIZE_384,
+ .SHA3_512 = sha3.BLOCK_SIZE_512,
+ .SM3 = sm3.BLOCK_SIZE,
+ .Legacy_KECCAK_224 = keccak.BLOCK_SIZE_224,
+ .Legacy_KECCAK_256 = keccak.BLOCK_SIZE_256,
+ .Legacy_KECCAK_384 = keccak.BLOCK_SIZE_384,
+ .Legacy_KECCAK_512 = keccak.BLOCK_SIZE_512,
+ .Insecure_MD5 = md5.BLOCK_SIZE,
+ .Insecure_SHA1 = sha1.BLOCK_SIZE,
+}
+
+// Context is a concrete instantiation of a specific hash algorithm.
+Context :: struct {
+ _algo: Algorithm,
+ _impl: union {
+ blake2b.Context,
+ blake2s.Context,
+ sha2.Context_256,
+ sha2.Context_512,
+ sha3.Context,
+ sm3.Context,
+ keccak.Context,
+ md5.Context,
+ sha1.Context,
+ },
+}
+
+@(private)
+_IMPL_IDS := [Algorithm]typeid {
+ .Invalid = nil,
+ .BLAKE2B = typeid_of(blake2b.Context),
+ .BLAKE2S = typeid_of(blake2s.Context),
+ .SHA224 = typeid_of(sha2.Context_256),
+ .SHA256 = typeid_of(sha2.Context_256),
+ .SHA384 = typeid_of(sha2.Context_512),
+ .SHA512 = typeid_of(sha2.Context_512),
+ .SHA512_256 = typeid_of(sha2.Context_512),
+ .SHA3_224 = typeid_of(sha3.Context),
+ .SHA3_256 = typeid_of(sha3.Context),
+ .SHA3_384 = typeid_of(sha3.Context),
+ .SHA3_512 = typeid_of(sha3.Context),
+ .SM3 = typeid_of(sm3.Context),
+ .Legacy_KECCAK_224 = typeid_of(keccak.Context),
+ .Legacy_KECCAK_256 = typeid_of(keccak.Context),
+ .Legacy_KECCAK_384 = typeid_of(keccak.Context),
+ .Legacy_KECCAK_512 = typeid_of(keccak.Context),
+ .Insecure_MD5 = typeid_of(md5.Context),
+ .Insecure_SHA1 = typeid_of(sha1.Context),
+}
+
+// init initializes a Context with a specific hash Algorithm.
+init :: proc(ctx: ^Context, algorithm: Algorithm) {
+ if ctx._impl != nil {
+ reset(ctx)
+ }
+
+ // Directly specialize the union by setting the type ID (save a copy).
+ reflect.set_union_variant_typeid(
+ ctx._impl,
+ _IMPL_IDS[algorithm],
+ )
+ switch algorithm {
+ case .BLAKE2B:
+ blake2b.init(&ctx._impl.(blake2b.Context))
+ case .BLAKE2S:
+ blake2s.init(&ctx._impl.(blake2s.Context))
+ case .SHA224:
+ sha2.init_224(&ctx._impl.(sha2.Context_256))
+ case .SHA256:
+ sha2.init_256(&ctx._impl.(sha2.Context_256))
+ case .SHA384:
+ sha2.init_384(&ctx._impl.(sha2.Context_512))
+ case .SHA512:
+ sha2.init_512(&ctx._impl.(sha2.Context_512))
+ case .SHA512_256:
+ sha2.init_512_256(&ctx._impl.(sha2.Context_512))
+ case .SHA3_224:
+ sha3.init_224(&ctx._impl.(sha3.Context))
+ case .SHA3_256:
+ sha3.init_256(&ctx._impl.(sha3.Context))
+ case .SHA3_384:
+ sha3.init_384(&ctx._impl.(sha3.Context))
+ case .SHA3_512:
+ sha3.init_512(&ctx._impl.(sha3.Context))
+ case .SM3:
+ sm3.init(&ctx._impl.(sm3.Context))
+ case .Legacy_KECCAK_224:
+ keccak.init_224(&ctx._impl.(keccak.Context))
+ case .Legacy_KECCAK_256:
+ keccak.init_256(&ctx._impl.(keccak.Context))
+ case .Legacy_KECCAK_384:
+ keccak.init_384(&ctx._impl.(keccak.Context))
+ case .Legacy_KECCAK_512:
+ keccak.init_512(&ctx._impl.(keccak.Context))
+ case .Insecure_MD5:
+ md5.init(&ctx._impl.(md5.Context))
+ case .Insecure_SHA1:
+ sha1.init(&ctx._impl.(sha1.Context))
+ case .Invalid:
+ panic("crypto/hash: uninitialized algorithm")
+ case:
+ panic("crypto/hash: invalid algorithm")
+ }
+
+ ctx._algo = algorithm
+}
+
+// update adds more data to the Context.
+update :: proc(ctx: ^Context, data: []byte) {
+ switch &impl in ctx._impl {
+ case blake2b.Context:
+ blake2b.update(&impl, data)
+ case blake2s.Context:
+ blake2s.update(&impl, data)
+ case sha2.Context_256:
+ sha2.update(&impl, data)
+ case sha2.Context_512:
+ sha2.update(&impl, data)
+ case sha3.Context:
+ sha3.update(&impl, data)
+ case sm3.Context:
+ sm3.update(&impl, data)
+ case keccak.Context:
+ keccak.update(&impl, data)
+ case md5.Context:
+ md5.update(&impl, data)
+ case sha1.Context:
+ sha1.update(&impl, data)
+ case:
+ panic("crypto/hash: uninitialized algorithm")
+ }
+}
+
+// final finalizes the Context, writes the digest to hash, and calls
+// reset on the Context.
+//
+// Iff finalize_clone is set, final will work on a copy of the Context,
+// which is useful for calculating rolling digests.
+final :: proc(ctx: ^Context, hash: []byte, finalize_clone: bool = false) {
+ switch &impl in ctx._impl {
+ case blake2b.Context:
+ blake2b.final(&impl, hash, finalize_clone)
+ case blake2s.Context:
+ blake2s.final(&impl, hash, finalize_clone)
+ case sha2.Context_256:
+ sha2.final(&impl, hash, finalize_clone)
+ case sha2.Context_512:
+ sha2.final(&impl, hash, finalize_clone)
+ case sha3.Context:
+ sha3.final(&impl, hash, finalize_clone)
+ case sm3.Context:
+ sm3.final(&impl, hash, finalize_clone)
+ case keccak.Context:
+ keccak.final(&impl, hash, finalize_clone)
+ case md5.Context:
+ md5.final(&impl, hash, finalize_clone)
+ case sha1.Context:
+ sha1.final(&impl, hash, finalize_clone)
+ case:
+ panic("crypto/hash: uninitialized algorithm")
+ }
+
+ if !finalize_clone {
+ reset(ctx)
+ }
+}
+
+// clone clones the Context other into ctx.
+clone :: proc(ctx, other: ^Context) {
+ // XXX/yawning: Maybe these cases should panic, because both cases
+ // are probably bugs.
+ if ctx == other {
+ return
+ }
+ if ctx._impl != nil {
+ reset(ctx)
+ }
+
+ ctx._algo = other._algo
+
+ reflect.set_union_variant_typeid(
+ ctx._impl,
+ reflect.union_variant_typeid(other._impl),
+ )
+ switch &src_impl in other._impl {
+ case blake2b.Context:
+ blake2b.clone(&ctx._impl.(blake2b.Context), &src_impl)
+ case blake2s.Context:
+ blake2s.clone(&ctx._impl.(blake2s.Context), &src_impl)
+ case sha2.Context_256:
+ sha2.clone(&ctx._impl.(sha2.Context_256), &src_impl)
+ case sha2.Context_512:
+ sha2.clone(&ctx._impl.(sha2.Context_512), &src_impl)
+ case sha3.Context:
+ sha3.clone(&ctx._impl.(sha3.Context), &src_impl)
+ case sm3.Context:
+ sm3.clone(&ctx._impl.(sm3.Context), &src_impl)
+ case keccak.Context:
+ keccak.clone(&ctx._impl.(keccak.Context), &src_impl)
+ case md5.Context:
+ md5.clone(&ctx._impl.(md5.Context), &src_impl)
+ case sha1.Context:
+ sha1.clone(&ctx._impl.(sha1.Context), &src_impl)
+ case:
+ panic("crypto/hash: uninitialized algorithm")
+ }
+}
+
+// reset sanitizes the Context. The Context must be re-initialized to
+// be used again.
+reset :: proc(ctx: ^Context) {
+ switch &impl in ctx._impl {
+ case blake2b.Context:
+ blake2b.reset(&impl)
+ case blake2s.Context:
+ blake2s.reset(&impl)
+ case sha2.Context_256:
+ sha2.reset(&impl)
+ case sha2.Context_512:
+ sha2.reset(&impl)
+ case sha3.Context:
+ sha3.reset(&impl)
+ case sm3.Context:
+ sm3.reset(&impl)
+ case keccak.Context:
+ keccak.reset(&impl)
+ case md5.Context:
+ md5.reset(&impl)
+ case sha1.Context:
+ sha1.reset(&impl)
+ case:
+ // Unlike clone, calling reset repeatedly is fine.
+ }
+
+ ctx._algo = .Invalid
+ ctx._impl = nil
+}
+
+// algorithm returns the Algorithm used by a Context instance.
+algorithm :: proc(ctx: ^Context) -> Algorithm {
+ return ctx._algo
+}
+
+// digest_size returns the digest size of a Context instance in bytes.
+digest_size :: proc(ctx: ^Context) -> int {
+ return DIGEST_SIZES[ctx._algo]
+}
+
+// block_size returns the block size of a Context instance in bytes.
+block_size :: proc(ctx: ^Context) -> int {
+ return BLOCK_SIZES[ctx._algo]
+}
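For completeness, a sketch of the generic low-level `Context` above driving a rolling digest via `finalize_clone`; the algorithm choice and input are illustrative, while the procedure names (`init`, `update`, `final`, `digest_size`) come from `low_level.odin` in this diff.

```odin
package low_level_example

import "core:crypto/hash"

main :: proc() {
	ctx: hash.Context
	hash.init(&ctx, hash.Algorithm.SHA3_256)

	digest := make([]byte, hash.digest_size(&ctx))
	defer delete(digest)

	data := "streamed input"
	hash.update(&ctx, transmute([]byte)(data))

	// finalize_clone = true: digest of the data so far, context stays live.
	hash.final(&ctx, digest, true)

	// Final call without the clone resets the context.
	hash.final(&ctx, digest)
}
```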
diff --git a/core/crypto/hmac/hmac.odin b/core/crypto/hmac/hmac.odin
new file mode 100644
index 000000000..f720d2181
--- /dev/null
+++ b/core/crypto/hmac/hmac.odin
@@ -0,0 +1,162 @@
+/*
+package hmac implements the HMAC MAC algorithm.
+
+See:
+- https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.198-1.pdf
+*/
+package hmac
+
+import "core:crypto"
+import "core:crypto/hash"
+import "core:mem"
+
+// sum will compute the HMAC with the specified algorithm and key
+// over msg, and write the computed digest to dst. It requires that
+// the dst buffer is the tag size.
+sum :: proc(algorithm: hash.Algorithm, dst, msg, key: []byte) {
+ ctx: Context
+
+ init(&ctx, algorithm, key)
+ update(&ctx, msg)
+ final(&ctx, dst)
+}
+
+// verify will verify the HMAC tag computed with the specified algorithm
+// and key over msg and return true iff the tag is valid. It requires
+// that the tag is correctly sized.
+verify :: proc(algorithm: hash.Algorithm, tag, msg, key: []byte) -> bool {
+ tag_buf: [hash.MAX_DIGEST_SIZE]byte
+
+ derived_tag := tag_buf[:hash.DIGEST_SIZES[algorithm]]
+ sum(algorithm, derived_tag, msg, key)
+
+ return crypto.compare_constant_time(derived_tag, tag) == 1
+}
+
+// Context is a concrete instantiation of HMAC with a specific hash
+// algorithm.
+Context :: struct {
+ _o_hash: hash.Context, // H(k ^ ipad) (not finalized)
+ _i_hash: hash.Context, // H(k ^ opad) (not finalized)
+ _tag_sz: int,
+ _is_initialized: bool,
+}
+
+// init initializes a Context with a specific hash Algorithm and key.
+init :: proc(ctx: ^Context, algorithm: hash.Algorithm, key: []byte) {
+ if ctx._is_initialized {
+ reset(ctx)
+ }
+
+ _init_hashes(ctx, algorithm, key)
+
+ ctx._tag_sz = hash.DIGEST_SIZES[algorithm]
+ ctx._is_initialized = true
+}
+
+// update adds more data to the Context.
+update :: proc(ctx: ^Context, data: []byte) {
+ assert(ctx._is_initialized)
+
+ hash.update(&ctx._i_hash, data)
+}
+
+// final finalizes the Context, writes the tag to dst, and calls
+// reset on the Context.
+final :: proc(ctx: ^Context, dst: []byte) {
+ assert(ctx._is_initialized)
+
+ defer (reset(ctx))
+
+ if len(dst) != ctx._tag_sz {
+ panic("crypto/hmac: invalid destination tag size")
+ }
+
+ hash.final(&ctx._i_hash, dst) // H((k ^ ipad) || text)
+
+ hash.update(&ctx._o_hash, dst) // H((k ^ opad) || H((k ^ ipad) || text))
+ hash.final(&ctx._o_hash, dst)
+}
+
+// reset sanitizes the Context. The Context must be re-initialized to
+// be used again.
+reset :: proc(ctx: ^Context) {
+ if !ctx._is_initialized {
+ return
+ }
+
+ hash.reset(&ctx._o_hash)
+ hash.reset(&ctx._i_hash)
+ ctx._tag_sz = 0
+ ctx._is_initialized = false
+}
+
+// algorithm returns the Algorithm used by a Context instance.
+algorithm :: proc(ctx: ^Context) -> hash.Algorithm {
+ assert(ctx._is_initialized)
+
+ return hash.algorithm(&ctx._i_hash)
+}
+
+// tag_size returns the tag size of a Context instance in bytes.
+tag_size :: proc(ctx: ^Context) -> int {
+ assert(ctx._is_initialized)
+
+ return ctx._tag_sz
+}
+
+@(private)
+_I_PAD :: 0x36
+_O_PAD :: 0x5c
+
+@(private)
+_init_hashes :: proc(ctx: ^Context, algorithm: hash.Algorithm, key: []byte) {
+ K0_buf: [hash.MAX_BLOCK_SIZE]byte
+ kPad_buf: [hash.MAX_BLOCK_SIZE]byte
+
+ kLen := len(key)
+ B := hash.BLOCK_SIZES[algorithm]
+ K0 := K0_buf[:B]
+ defer mem.zero_explicit(raw_data(K0), B)
+
+ switch {
+ case kLen == B, kLen < B:
+ // If the length of K = B: set K0 = K.
+ //
+ // If the length of K < B: append zeros to the end of K to
+ // create a B-byte string K0 (e.g., if K is 20 bytes in
+ // length and B = 64, then K will be appended with 44 zero
+ // bytes x’00’).
+ //
+ // K0 is zero-initialized, so the copy handles both cases.
+ copy(K0, key)
+ case kLen > B:
+ // If the length of K > B: hash K to obtain an L byte string,
+ // then append (B-L) zeros to create a B-byte string K0
+ // (i.e., K0 = H(K) || 00...00).
+ tmpCtx := &ctx._o_hash // Saves allocating a hash.Context.
+ hash.init(tmpCtx, algorithm)
+ hash.update(tmpCtx, key)
+ hash.final(tmpCtx, K0)
+ }
+
+ // Initialize the hashes, and write the padded keys:
+ // - ctx._i_hash -> H(K0 ^ ipad)
+ // - ctx._o_hash -> H(K0 ^ opad)
+
+ hash.init(&ctx._o_hash, algorithm)
+ hash.init(&ctx._i_hash, algorithm)
+
+ kPad := kPad_buf[:B]
+ defer mem.zero_explicit(raw_data(kPad), B)
+
+ for v, i in K0 {
+ kPad[i] = v ~ _I_PAD
+ }
+ hash.update(&ctx._i_hash, kPad)
+
+ for v, i in K0 {
+ kPad[i] = v ~ _O_PAD
+ }
+ hash.update(&ctx._o_hash, kPad)
+}
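A brief sketch of the one-shot HMAC helpers added above; the key and message are placeholders, and the tag buffer is sized with `hash.DIGEST_SIZES` exactly as the `sum`/`verify` contracts in this diff require.

```odin
package hmac_example

import "core:crypto/hash"
import "core:crypto/hmac"

main :: proc() {
	key_str := "a demo key, use a real key in practice"
	msg_str := "a message to authenticate"
	key := transmute([]byte)(key_str)
	msg := transmute([]byte)(msg_str)

	// The tag buffer must be exactly the tag (digest) size of the algorithm.
	tag: [hash.MAX_DIGEST_SIZE]byte
	tag_len := hash.DIGEST_SIZES[hash.Algorithm.SHA256]

	hmac.sum(hash.Algorithm.SHA256, tag[:tag_len], msg, key)
	assert(hmac.verify(hash.Algorithm.SHA256, tag[:tag_len], msg, key))
}
```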
diff --git a/core/crypto/legacy/keccak/keccak.odin b/core/crypto/legacy/keccak/keccak.odin
index 09db853a6..00ad06ad9 100644
--- a/core/crypto/legacy/keccak/keccak.odin
+++ b/core/crypto/legacy/keccak/keccak.odin
@@ -1,3 +1,11 @@
+/*
+package keccak implements the Keccak hash algorithm family.
+
+During the SHA-3 standardization process, the padding scheme was changed,
+thus Keccak and SHA-3 produce different outputs. Most users should use
+SHA-3 and/or SHAKE instead; however, the legacy algorithm is provided for
+backward compatibility purposes.
+*/
package keccak
/*
@@ -6,372 +14,82 @@ package keccak
List of contributors:
zhibog, dotbmp: Initial implementation.
-
- Interface for the Keccak hashing algorithm.
- This is done because the padding in the SHA3 standard was changed by the NIST, resulting in a different output.
*/
-import "core:io"
-import "core:os"
-
import "../../_sha3"
-/*
- High level API
-*/
-
+// DIGEST_SIZE_224 is the Keccak-224 digest size.
DIGEST_SIZE_224 :: 28
+// DIGEST_SIZE_256 is the Keccak-256 digest size.
DIGEST_SIZE_256 :: 32
+// DIGEST_SIZE_384 is the Keccak-384 digest size.
DIGEST_SIZE_384 :: 48
+// DIGEST_SIZE_512 is the Keccak-512 digest size.
DIGEST_SIZE_512 :: 64
-// hash_string_224 will hash the given input and return the
-// computed hash
-hash_string_224 :: proc(data: string) -> [DIGEST_SIZE_224]byte {
- return hash_bytes_224(transmute([]byte)(data))
-}
+// BLOCK_SIZE_224 is the Keccak-224 block size in bytes.
+BLOCK_SIZE_224 :: _sha3.RATE_224
+// BLOCK_SIZE_256 is the Keccak-256 block size in bytes.
+BLOCK_SIZE_256 :: _sha3.RATE_256
+// BLOCK_SIZE_384 is the Keccak-384 block size in bytes.
+BLOCK_SIZE_384 :: _sha3.RATE_384
+// BLOCK_SIZE_512 is the Keccak-512 block size in bytes.
+BLOCK_SIZE_512 :: _sha3.RATE_512
-// hash_bytes_224 will hash the given input and return the
-// computed hash
-hash_bytes_224 :: proc(data: []byte) -> [DIGEST_SIZE_224]byte {
- hash: [DIGEST_SIZE_224]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_224
- ctx.is_keccak = true
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer_224 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_224 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_224(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_224 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_224 :: proc(data, hash: []byte) {
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_224
- ctx.is_keccak = true
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
+// Context is a Keccak instance.
+Context :: distinct _sha3.Context
-// hash_stream_224 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_224 :: proc(s: io.Stream) -> ([DIGEST_SIZE_224]byte, bool) {
- hash: [DIGEST_SIZE_224]byte
- ctx: Context
+// init_224 initializes a Context for Keccak-224.
+init_224 :: proc(ctx: ^Context) {
ctx.mdlen = DIGEST_SIZE_224
- ctx.is_keccak = true
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
+ _init(ctx)
}
-// hash_file_224 will read the file provided by the given handle
-// and compute a hash
-hash_file_224 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_224]byte, bool) {
- if !load_at_once {
- return hash_stream_224(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_224(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_224]byte{}, false
-}
-
-hash_224 :: proc {
- hash_stream_224,
- hash_file_224,
- hash_bytes_224,
- hash_string_224,
- hash_bytes_to_buffer_224,
- hash_string_to_buffer_224,
-}
-
-// hash_string_256 will hash the given input and return the
-// computed hash
-hash_string_256 :: proc(data: string) -> [DIGEST_SIZE_256]byte {
- return hash_bytes_256(transmute([]byte)(data))
-}
-
-// hash_bytes_256 will hash the given input and return the
-// computed hash
-hash_bytes_256 :: proc(data: []byte) -> [DIGEST_SIZE_256]byte {
- hash: [DIGEST_SIZE_256]byte
- ctx: Context
+// init_256 initializes a Context for Keccak-256.
+init_256 :: proc(ctx: ^Context) {
ctx.mdlen = DIGEST_SIZE_256
- ctx.is_keccak = true
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer_256 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_256 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_256(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_256 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_256 :: proc(data, hash: []byte) {
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_256
- ctx.is_keccak = true
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream_256 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_256 :: proc(s: io.Stream) -> ([DIGEST_SIZE_256]byte, bool) {
- hash: [DIGEST_SIZE_256]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_256
- ctx.is_keccak = true
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
-
-// hash_file_256 will read the file provided by the given handle
-// and compute a hash
-hash_file_256 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_256]byte, bool) {
- if !load_at_once {
- return hash_stream_256(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_256(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_256]byte{}, false
+ _init(ctx)
}
-hash_256 :: proc {
- hash_stream_256,
- hash_file_256,
- hash_bytes_256,
- hash_string_256,
- hash_bytes_to_buffer_256,
- hash_string_to_buffer_256,
-}
-
-// hash_string_384 will hash the given input and return the
-// computed hash
-hash_string_384 :: proc(data: string) -> [DIGEST_SIZE_384]byte {
- return hash_bytes_384(transmute([]byte)(data))
-}
-
-// hash_bytes_384 will hash the given input and return the
-// computed hash
-hash_bytes_384 :: proc(data: []byte) -> [DIGEST_SIZE_384]byte {
- hash: [DIGEST_SIZE_384]byte
- ctx: Context
+// init_384 initializes a Context for Keccak-384.
+init_384 :: proc(ctx: ^Context) {
ctx.mdlen = DIGEST_SIZE_384
- ctx.is_keccak = true
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer_384 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_384 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_384(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_384 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_384 :: proc(data, hash: []byte) {
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_384
- ctx.is_keccak = true
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream_384 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_384 :: proc(s: io.Stream) -> ([DIGEST_SIZE_384]byte, bool) {
- hash: [DIGEST_SIZE_384]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_384
- ctx.is_keccak = true
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
-
-// hash_file_384 will read the file provided by the given handle
-// and compute a hash
-hash_file_384 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_384]byte, bool) {
- if !load_at_once {
- return hash_stream_384(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_384(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_384]byte{}, false
-}
-
-hash_384 :: proc {
- hash_stream_384,
- hash_file_384,
- hash_bytes_384,
- hash_string_384,
- hash_bytes_to_buffer_384,
- hash_string_to_buffer_384,
+ _init(ctx)
}
-// hash_string_512 will hash the given input and return the
-// computed hash
-hash_string_512 :: proc(data: string) -> [DIGEST_SIZE_512]byte {
- return hash_bytes_512(transmute([]byte)(data))
-}
-
-// hash_bytes_512 will hash the given input and return the
-// computed hash
-hash_bytes_512 :: proc(data: []byte) -> [DIGEST_SIZE_512]byte {
- hash: [DIGEST_SIZE_512]byte
- ctx: Context
+// init_512 initializes a Context for Keccak-512.
+init_512 :: proc(ctx: ^Context) {
ctx.mdlen = DIGEST_SIZE_512
- ctx.is_keccak = true
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
+ _init(ctx)
}
-// hash_string_to_buffer_512 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_512 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_512(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_512 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_512 :: proc(data, hash: []byte) {
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_512
+@(private)
+_init :: proc(ctx: ^Context) {
ctx.is_keccak = true
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
+ _sha3.init(transmute(^_sha3.Context)(ctx))
}
-// hash_stream_512 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_512 :: proc(s: io.Stream) -> ([DIGEST_SIZE_512]byte, bool) {
- hash: [DIGEST_SIZE_512]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_512
- ctx.is_keccak = true
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
-
-// hash_file_512 will read the file provided by the given handle
-// and compute a hash
-hash_file_512 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_512]byte, bool) {
- if !load_at_once {
- return hash_stream_512(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_512(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_512]byte{}, false
-}
-
-hash_512 :: proc {
- hash_stream_512,
- hash_file_512,
- hash_bytes_512,
- hash_string_512,
- hash_bytes_to_buffer_512,
- hash_string_to_buffer_512,
+// update adds more data to the Context.
+update :: proc(ctx: ^Context, data: []byte) {
+ _sha3.update(transmute(^_sha3.Context)(ctx), data)
}
-/*
- Low level API
-*/
-
-Context :: _sha3.Sha3_Context
-
-init :: proc(ctx: ^Context) {
- ctx.is_keccak = true
- _sha3.init(ctx)
+// final finalizes the Context, writes the digest to hash, and calls
+// reset on the Context.
+//
+// Iff finalize_clone is set, final will work on a copy of the Context,
+// which is useful for calculating rolling digests.
+final :: proc(ctx: ^Context, hash: []byte, finalize_clone: bool = false) {
+ _sha3.final(transmute(^_sha3.Context)(ctx), hash, finalize_clone)
}
-update :: proc(ctx: ^Context, data: []byte) {
- _sha3.update(ctx, data)
+// clone clones the Context other into ctx.
+clone :: proc(ctx, other: ^Context) {
+ _sha3.clone(transmute(^_sha3.Context)(ctx), transmute(^_sha3.Context)(other))
}
-final :: proc(ctx: ^Context, hash: []byte) {
- _sha3.final(ctx, hash)
+// reset sanitizes the Context. The Context must be re-initialized to
+// be used again.
+reset :: proc(ctx: ^Context) {
+ _sha3.reset(transmute(^_sha3.Context)(ctx))
}
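A minimal sketch of the refactored legacy `keccak` interface above, using the `init_256` initializer introduced in this diff; the input is illustrative, and note that Keccak-256 here uses the pre-standardization padding, not SHA3-256.

```odin
package keccak_example

import "core:crypto/legacy/keccak"

main :: proc() {
	input := "legacy data"

	ctx: keccak.Context
	keccak.init_256(&ctx)
	keccak.update(&ctx, transmute([]byte)(input))

	digest: [keccak.DIGEST_SIZE_256]byte
	keccak.final(&ctx, digest[:]) // also resets the context
}
```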
diff --git a/core/crypto/legacy/md5/md5.odin b/core/crypto/legacy/md5/md5.odin
index 69ae087e4..c744a9bcf 100644
--- a/core/crypto/legacy/md5/md5.odin
+++ b/core/crypto/legacy/md5/md5.odin
@@ -1,3 +1,13 @@
+/*
+package md5 implements the MD5 hash algorithm.
+
+WARNING: The MD5 algorithm is known to be insecure and should only be
+used for interoperating with legacy applications.
+
+See:
+- https://eprint.iacr.org/2005/075
+- https://datatracker.ietf.org/doc/html/rfc1321
+*/
package md5
/*
@@ -6,103 +16,29 @@ package md5
List of contributors:
zhibog, dotbmp: Initial implementation.
-
- Implementation of the MD5 hashing algorithm, as defined in RFC 1321 <https://datatracker.ietf.org/doc/html/rfc1321>
*/
import "core:encoding/endian"
-import "core:io"
import "core:math/bits"
import "core:mem"
-import "core:os"
-
-/*
- High level API
-*/
+// DIGEST_SIZE is the MD5 digest size in bytes.
DIGEST_SIZE :: 16
-// hash_string will hash the given input and return the
-// computed hash
-hash_string :: proc(data: string) -> [DIGEST_SIZE]byte {
- return hash_bytes(transmute([]byte)(data))
-}
-
-// hash_bytes will hash the given input and return the
-// computed hash
-hash_bytes :: proc(data: []byte) -> [DIGEST_SIZE]byte {
- hash: [DIGEST_SIZE]byte
- ctx: Context
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer :: proc(data, hash: []byte) {
- ctx: Context
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream will read the stream in chunks and compute a
-// hash from its contents
-hash_stream :: proc(s: io.Stream) -> ([DIGEST_SIZE]byte, bool) {
- hash: [DIGEST_SIZE]byte
- ctx: Context
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
+// BLOCK_SIZE is the MD5 block size in bytes.
+BLOCK_SIZE :: 64
-// hash_file will read the file provided by the given handle
-// and compute a hash
-hash_file :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE]byte, bool) {
- if !load_at_once {
- return hash_stream(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes(buf[:]), ok
- }
- }
- return [DIGEST_SIZE]byte{}, false
-}
+// Context is an MD5 instance.
+Context :: struct {
+ data: [BLOCK_SIZE]byte,
+ state: [4]u32,
+ bitlen: u64,
+ datalen: u32,
-hash :: proc {
- hash_stream,
- hash_file,
- hash_bytes,
- hash_string,
- hash_bytes_to_buffer,
- hash_string_to_buffer,
+ is_initialized: bool,
}
-/*
- Low level API
-*/
-
+// init initializes a Context.
init :: proc(ctx: ^Context) {
ctx.state[0] = 0x67452301
ctx.state[1] = 0xefcdab89
@@ -115,6 +51,7 @@ init :: proc(ctx: ^Context) {
ctx.is_initialized = true
}
+// update adds more data to the Context.
update :: proc(ctx: ^Context, data: []byte) {
assert(ctx.is_initialized)
@@ -129,13 +66,26 @@ update :: proc(ctx: ^Context, data: []byte) {
}
}
-final :: proc(ctx: ^Context, hash: []byte) {
+// final finalizes the Context, writes the digest to hash, and calls
+// reset on the Context.
+//
+// Iff finalize_clone is set, final will work on a copy of the Context,
+// which is useful for calculating rolling digests.
+final :: proc(ctx: ^Context, hash: []byte, finalize_clone: bool = false) {
assert(ctx.is_initialized)
if len(hash) < DIGEST_SIZE {
panic("crypto/md5: invalid destination digest size")
}
+ ctx := ctx
+ if finalize_clone {
+ tmp_ctx: Context
+ clone(&tmp_ctx, ctx)
+ ctx = &tmp_ctx
+ }
+ defer(reset(ctx))
+
i := ctx.datalen
if ctx.datalen < 56 {
@@ -163,25 +113,27 @@ final :: proc(ctx: ^Context, hash: []byte) {
for i = 0; i < DIGEST_SIZE / 4; i += 1 {
endian.unchecked_put_u32le(hash[i * 4:], ctx.state[i])
}
+}
- ctx.is_initialized = false
+// clone clones the Context other into ctx.
+clone :: proc(ctx, other: ^$T) {
+ ctx^ = other^
+}
+
+// reset sanitizes the Context. The Context must be re-initialized to
+// be used again.
+reset :: proc(ctx: ^$T) {
+ if !ctx.is_initialized {
+ return
+ }
+
+ mem.zero_explicit(ctx, size_of(ctx^))
}
/*
MD5 implementation
*/
-BLOCK_SIZE :: 64
-
-Context :: struct {
- data: [BLOCK_SIZE]byte,
- state: [4]u32,
- bitlen: u64,
- datalen: u32,
-
- is_initialized: bool,
-}
-
/*
@note(zh): F, G, H and I, as mentioned in the RFC, have been inlined into FF, GG, HH
and II respectively, instead of declaring them separately.
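With the high-level `hash_*` procedure group removed, callers drive the Context directly. A minimal sketch of a one-shot digest using only the procs visible in this diff (init, update, final); the wrapper proc name is illustrative and not part of the package:

import "core:crypto/legacy/md5"

// md5_sum is an illustrative helper, not part of the package API.
md5_sum :: proc(msg: []byte) -> [md5.DIGEST_SIZE]byte {
	ctx: md5.Context
	digest: [md5.DIGEST_SIZE]byte

	md5.init(&ctx)
	md5.update(&ctx, msg)
	md5.final(&ctx, digest[:]) // final also sanitizes ctx via reset

	return digest
}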
diff --git a/core/crypto/legacy/sha1/sha1.odin b/core/crypto/legacy/sha1/sha1.odin
index 6c4407067..8c6e59901 100644
--- a/core/crypto/legacy/sha1/sha1.odin
+++ b/core/crypto/legacy/sha1/sha1.odin
@@ -1,3 +1,14 @@
+/*
+package sha1 implements the SHA1 hash algorithm.
+
+WARNING: The SHA1 algorithm is known to be insecure and should only be
+used for interoperating with legacy applications.
+
+See:
+- https://eprint.iacr.org/2017/190
+- https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf
+- https://datatracker.ietf.org/doc/html/rfc3174
+*/
package sha1
/*
@@ -6,103 +17,30 @@ package sha1
List of contributors:
zhibog, dotbmp: Initial implementation.
-
- Implementation of the SHA1 hashing algorithm, as defined in RFC 3174 <https://datatracker.ietf.org/doc/html/rfc3174>
*/
import "core:encoding/endian"
-import "core:io"
import "core:math/bits"
import "core:mem"
-import "core:os"
-
-/*
- High level API
-*/
+// DIGEST_SIZE is the SHA1 digest size in bytes.
DIGEST_SIZE :: 20
-// hash_string will hash the given input and return the
-// computed hash
-hash_string :: proc(data: string) -> [DIGEST_SIZE]byte {
- return hash_bytes(transmute([]byte)(data))
-}
-
-// hash_bytes will hash the given input and return the
-// computed hash
-hash_bytes :: proc(data: []byte) -> [DIGEST_SIZE]byte {
- hash: [DIGEST_SIZE]byte
- ctx: Context
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer :: proc(data, hash: []byte) {
- ctx: Context
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream will read the stream in chunks and compute a
-// hash from its contents
-hash_stream :: proc(s: io.Stream) -> ([DIGEST_SIZE]byte, bool) {
- hash: [DIGEST_SIZE]byte
- ctx: Context
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
+// BLOCK_SIZE is the SHA1 block size in bytes.
+BLOCK_SIZE :: 64
-// hash_file will read the file provided by the given handle
-// and compute a hash
-hash_file :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE]byte, bool) {
- if !load_at_once {
- return hash_stream(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes(buf[:]), ok
- }
- }
- return [DIGEST_SIZE]byte{}, false
-}
+// Context is a SHA1 instance.
+Context :: struct {
+ data: [BLOCK_SIZE]byte,
+ state: [5]u32,
+ k: [4]u32,
+ bitlen: u64,
+ datalen: u32,
-hash :: proc {
- hash_stream,
- hash_file,
- hash_bytes,
- hash_string,
- hash_bytes_to_buffer,
- hash_string_to_buffer,
+ is_initialized: bool,
}
-/*
- Low level API
-*/
-
+// init initializes a Context.
init :: proc(ctx: ^Context) {
ctx.state[0] = 0x67452301
ctx.state[1] = 0xefcdab89
@@ -120,6 +58,7 @@ init :: proc(ctx: ^Context) {
ctx.is_initialized = true
}
+// update adds more data to the Context.
update :: proc(ctx: ^Context, data: []byte) {
assert(ctx.is_initialized)
@@ -134,13 +73,26 @@ update :: proc(ctx: ^Context, data: []byte) {
}
}
-final :: proc(ctx: ^Context, hash: []byte) {
+// final finalizes the Context, writes the digest to hash, and calls
+// reset on the Context.
+//
+// Iff finalize_clone is set, final will work on a copy of the Context,
+// which is useful for calculating rolling digests.
+final :: proc(ctx: ^Context, hash: []byte, finalize_clone: bool = false) {
assert(ctx.is_initialized)
if len(hash) < DIGEST_SIZE {
panic("crypto/sha1: invalid destination digest size")
}
+ ctx := ctx
+ if finalize_clone {
+ tmp_ctx: Context
+ clone(&tmp_ctx, ctx)
+ ctx = &tmp_ctx
+ }
+ defer(reset(ctx))
+
i := ctx.datalen
if ctx.datalen < 56 {
@@ -168,26 +120,27 @@ final :: proc(ctx: ^Context, hash: []byte) {
for i = 0; i < DIGEST_SIZE / 4; i += 1 {
endian.unchecked_put_u32be(hash[i * 4:], ctx.state[i])
}
+}
+
+// clone clones the Context other into ctx.
+clone :: proc(ctx, other: ^$T) {
+ ctx^ = other^
+}
+
+// reset sanitizes the Context. The Context must be re-initialized to
+// be used again.
+reset :: proc(ctx: ^$T) {
+ if !ctx.is_initialized {
+ return
+ }
- ctx.is_initialized = false
+ mem.zero_explicit(ctx, size_of(ctx^))
}
/*
SHA1 implementation
*/
-BLOCK_SIZE :: 64
-
-Context :: struct {
- data: [BLOCK_SIZE]byte,
- datalen: u32,
- bitlen: u64,
- state: [5]u32,
- k: [4]u32,
-
- is_initialized: bool,
-}
-
@(private)
transform :: proc "contextless" (ctx: ^Context, data: []byte) {
a, b, c, d, e, i, t: u32
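The `finalize_clone` parameter makes rolling digests possible without cloning by hand: the digest is produced from a copy while the caller's Context keeps accumulating data. A sketch against the SHA-1 procs shown above; the proc name and loop are illustrative:

import "core:crypto/legacy/sha1"

// rolling_sha1 is an illustrative helper, not part of the package API.
rolling_sha1 :: proc(chunks: [][]byte) {
	ctx: sha1.Context
	sha1.init(&ctx)

	digest: [sha1.DIGEST_SIZE]byte
	for chunk in chunks {
		sha1.update(&ctx, chunk)
		// Finalize a copy of ctx; ctx itself stays live for further updates.
		sha1.final(&ctx, digest[:], true)
		// ... use the intermediate digest ...
	}

	// The last call finalizes and sanitizes ctx itself.
	sha1.final(&ctx, digest[:])
}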
diff --git a/core/crypto/poly1305/poly1305.odin b/core/crypto/poly1305/poly1305.odin
index cf60f7166..a2fb3c223 100644
--- a/core/crypto/poly1305/poly1305.odin
+++ b/core/crypto/poly1305/poly1305.odin
@@ -23,10 +23,6 @@ verify :: proc (tag, msg, key: []byte) -> bool {
ctx: Context = ---
derived_tag: [16]byte = ---
- if len(tag) != TAG_SIZE {
- panic("crypto/poly1305: invalid tag size")
- }
-
init(&ctx, key)
update(&ctx, msg)
final(&ctx, derived_tag[:])
diff --git a/core/crypto/sha2/sha2.odin b/core/crypto/sha2/sha2.odin
index 10ac73ab6..2128e3950 100644
--- a/core/crypto/sha2/sha2.odin
+++ b/core/crypto/sha2/sha2.odin
@@ -1,3 +1,10 @@
+/*
+package sha2 implements the SHA2 hash algorithm family.
+
+See:
+- https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf
+- https://datatracker.ietf.org/doc/html/rfc3874
+*/
package sha2
/*
@@ -6,431 +13,83 @@ package sha2
List of contributors:
zhibog, dotbmp: Initial implementation.
-
- Implementation of the SHA2 hashing algorithm, as defined in <https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf>
- and in RFC 3874 <https://datatracker.ietf.org/doc/html/rfc3874>
*/
import "core:encoding/endian"
-import "core:io"
import "core:math/bits"
-import "core:os"
-
-/*
- High level API
-*/
+import "core:mem"
+// DIGEST_SIZE_224 is the SHA-224 digest size in bytes.
DIGEST_SIZE_224 :: 28
+// DIGEST_SIZE_256 is the SHA-256 digest size in bytes.
DIGEST_SIZE_256 :: 32
+// DIGEST_SIZE_384 is the SHA-384 digest size in bytes.
DIGEST_SIZE_384 :: 48
+// DIGEST_SIZE_512 is the SHA-512 digest size in bytes.
DIGEST_SIZE_512 :: 64
+// DIGEST_SIZE_512_256 is the SHA-512/256 digest size in bytes.
DIGEST_SIZE_512_256 :: 32
-// hash_string_224 will hash the given input and return the
-// computed hash
-hash_string_224 :: proc(data: string) -> [DIGEST_SIZE_224]byte {
- return hash_bytes_224(transmute([]byte)(data))
-}
-
-// hash_bytes_224 will hash the given input and return the
-// computed hash
-hash_bytes_224 :: proc(data: []byte) -> [DIGEST_SIZE_224]byte {
- hash: [DIGEST_SIZE_224]byte
- ctx: Context_256
- ctx.md_bits = 224
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer_224 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_224 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_224(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_224 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_224 :: proc(data, hash: []byte) {
- ctx: Context_256
- ctx.md_bits = 224
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream_224 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_224 :: proc(s: io.Stream) -> ([DIGEST_SIZE_224]byte, bool) {
- hash: [DIGEST_SIZE_224]byte
- ctx: Context_256
- ctx.md_bits = 224
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
+// BLOCK_SIZE_256 is the SHA-224 and SHA-256 block size in bytes.
+BLOCK_SIZE_256 :: 64
+// BLOCK_SIZE_512 is the SHA-384, SHA-512, and SHA-512/256 block size
+// in bytes.
+BLOCK_SIZE_512 :: 128
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
-
-// hash_file_224 will read the file provided by the given handle
-// and compute a hash
-hash_file_224 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_224]byte, bool) {
- if !load_at_once {
- return hash_stream_224(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_224(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_224]byte{}, false
-}
-
-hash_224 :: proc {
- hash_stream_224,
- hash_file_224,
- hash_bytes_224,
- hash_string_224,
- hash_bytes_to_buffer_224,
- hash_string_to_buffer_224,
-}
+// Context_256 is a SHA-224 or SHA-256 instance.
+Context_256 :: struct {
+ block: [BLOCK_SIZE_256]byte,
+ h: [8]u32,
+ bitlength: u64,
+ length: u64,
+ md_bits: int,
-// hash_string_256 will hash the given input and return the
-// computed hash
-hash_string_256 :: proc(data: string) -> [DIGEST_SIZE_256]byte {
- return hash_bytes_256(transmute([]byte)(data))
+ is_initialized: bool,
}
-// hash_bytes_256 will hash the given input and return the
-// computed hash
-hash_bytes_256 :: proc(data: []byte) -> [DIGEST_SIZE_256]byte {
- hash: [DIGEST_SIZE_256]byte
- ctx: Context_256
- ctx.md_bits = 256
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
+// Context_512 is a SHA-384, SHA-512, or SHA-512/256 instance.
+Context_512 :: struct {
+ block: [BLOCK_SIZE_512]byte,
+ h: [8]u64,
+ bitlength: u64,
+ length: u64,
+ md_bits: int,
-// hash_string_to_buffer_256 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_256 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_256(transmute([]byte)(data), hash)
+ is_initialized: bool,
}
-// hash_bytes_to_buffer_256 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_256 :: proc(data, hash: []byte) {
- ctx: Context_256
- ctx.md_bits = 256
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
+// init_224 initializes a Context_256 for SHA-224.
+init_224 :: proc(ctx: ^Context_256) {
+ ctx.md_bits = 224
+ _init(ctx)
}
-// hash_stream_256 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_256 :: proc(s: io.Stream) -> ([DIGEST_SIZE_256]byte, bool) {
- hash: [DIGEST_SIZE_256]byte
- ctx: Context_256
+// init_256 initializes a Context_256 for SHA-256.
+init_256 :: proc(ctx: ^Context_256) {
ctx.md_bits = 256
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
+ _init(ctx)
}
-// hash_file_256 will read the file provided by the given handle
-// and compute a hash
-hash_file_256 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_256]byte, bool) {
- if !load_at_once {
- return hash_stream_256(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_256(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_256]byte{}, false
-}
-
-hash_256 :: proc {
- hash_stream_256,
- hash_file_256,
- hash_bytes_256,
- hash_string_256,
- hash_bytes_to_buffer_256,
- hash_string_to_buffer_256,
-}
-
-// hash_string_384 will hash the given input and return the
-// computed hash
-hash_string_384 :: proc(data: string) -> [DIGEST_SIZE_384]byte {
- return hash_bytes_384(transmute([]byte)(data))
-}
-
-// hash_bytes_384 will hash the given input and return the
-// computed hash
-hash_bytes_384 :: proc(data: []byte) -> [DIGEST_SIZE_384]byte {
- hash: [DIGEST_SIZE_384]byte
- ctx: Context_512
- ctx.md_bits = 384
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer_384 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_384 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_384(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_384 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_384 :: proc(data, hash: []byte) {
- ctx: Context_512
- ctx.md_bits = 384
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream_384 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_384 :: proc(s: io.Stream) -> ([DIGEST_SIZE_384]byte, bool) {
- hash: [DIGEST_SIZE_384]byte
- ctx: Context_512
+// init_384 initializes a Context_512 for SHA-384.
+init_384 :: proc(ctx: ^Context_512) {
ctx.md_bits = 384
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
-
-// hash_file_384 will read the file provided by the given handle
-// and compute a hash
-hash_file_384 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_384]byte, bool) {
- if !load_at_once {
- return hash_stream_384(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_384(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_384]byte{}, false
+ _init(ctx)
}
-hash_384 :: proc {
- hash_stream_384,
- hash_file_384,
- hash_bytes_384,
- hash_string_384,
- hash_bytes_to_buffer_384,
- hash_string_to_buffer_384,
-}
-
-// hash_string_512 will hash the given input and return the
-// computed hash
-hash_string_512 :: proc(data: string) -> [DIGEST_SIZE_512]byte {
- return hash_bytes_512(transmute([]byte)(data))
-}
-
-// hash_bytes_512 will hash the given input and return the
-// computed hash
-hash_bytes_512 :: proc(data: []byte) -> [DIGEST_SIZE_512]byte {
- hash: [DIGEST_SIZE_512]byte
- ctx: Context_512
- ctx.md_bits = 512
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer_512 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_512 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_512(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_512 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_512 :: proc(data, hash: []byte) {
- ctx: Context_512
+// init_512 initializes a Context_512 for SHA-512.
+init_512 :: proc(ctx: ^Context_512) {
ctx.md_bits = 512
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
+ _init(ctx)
}
-// hash_stream_512 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_512 :: proc(s: io.Stream) -> ([DIGEST_SIZE_512]byte, bool) {
- hash: [DIGEST_SIZE_512]byte
- ctx: Context_512
- ctx.md_bits = 512
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
-
-// hash_file_512 will read the file provided by the given handle
-// and compute a hash
-hash_file_512 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_512]byte, bool) {
- if !load_at_once {
- return hash_stream_512(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_512(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_512]byte{}, false
-}
-
-hash_512 :: proc {
- hash_stream_512,
- hash_file_512,
- hash_bytes_512,
- hash_string_512,
- hash_bytes_to_buffer_512,
- hash_string_to_buffer_512,
-}
-
-// hash_string_512_256 will hash the given input and return the
-// computed hash
-hash_string_512_256 :: proc(data: string) -> [DIGEST_SIZE_512_256]byte {
- return hash_bytes_512_256(transmute([]byte)(data))
-}
-
-// hash_bytes_512_256 will hash the given input and return the
-// computed hash
-hash_bytes_512_256 :: proc(data: []byte) -> [DIGEST_SIZE_512_256]byte {
- hash: [DIGEST_SIZE_512_256]byte
- ctx: Context_512
- ctx.md_bits = 256
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer_512_256 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_512_256 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_512_256(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_512_256 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_512_256 :: proc(data, hash: []byte) {
- ctx: Context_512
- ctx.md_bits = 256
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream_512_256 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_512_256 :: proc(s: io.Stream) -> ([DIGEST_SIZE_512_256]byte, bool) {
- hash: [DIGEST_SIZE_512_256]byte
- ctx: Context_512
+// init_512_256 initializes a Context_512 for SHA-512/256.
+init_512_256 :: proc(ctx: ^Context_512) {
ctx.md_bits = 256
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
+ _init(ctx)
}
-// hash_file_512_256 will read the file provided by the given handle
-// and compute a hash
-hash_file_512_256 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_512_256]byte, bool) {
- if !load_at_once {
- return hash_stream_512_256(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_512_256(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_512_256]byte{}, false
-}
-
-hash_512_256 :: proc {
- hash_stream_512_256,
- hash_file_512_256,
- hash_bytes_512_256,
- hash_string_512_256,
- hash_bytes_to_buffer_512_256,
- hash_string_to_buffer_512_256,
-}
-
-/*
- Low level API
-*/
-
-init :: proc(ctx: ^$T) {
+@(private)
+_init :: proc(ctx: ^$T) {
when T == Context_256 {
switch ctx.md_bits {
case 224:
@@ -497,13 +156,14 @@ init :: proc(ctx: ^$T) {
ctx.is_initialized = true
}
+// update adds more data to the Context.
update :: proc(ctx: ^$T, data: []byte) {
assert(ctx.is_initialized)
when T == Context_256 {
- CURR_BLOCK_SIZE :: SHA256_BLOCK_SIZE
+ CURR_BLOCK_SIZE :: BLOCK_SIZE_256
} else when T == Context_512 {
- CURR_BLOCK_SIZE :: SHA512_BLOCK_SIZE
+ CURR_BLOCK_SIZE :: BLOCK_SIZE_512
}
data := data
@@ -528,21 +188,34 @@ update :: proc(ctx: ^$T, data: []byte) {
}
}
-final :: proc(ctx: ^$T, hash: []byte) {
+// final finalizes the Context, writes the digest to hash, and calls
+// reset on the Context.
+//
+// Iff finalize_clone is set, final will work on a copy of the Context,
+// which is useful for calculating rolling digests.
+final :: proc(ctx: ^$T, hash: []byte, finalize_clone: bool = false) {
assert(ctx.is_initialized)
if len(hash) * 8 < ctx.md_bits {
panic("crypto/sha2: invalid destination digest size")
}
+ ctx := ctx
+ if finalize_clone {
+ tmp_ctx: T
+ clone(&tmp_ctx, ctx)
+ ctx = &tmp_ctx
+ }
+ defer(reset(ctx))
+
length := ctx.length
- raw_pad: [SHA512_BLOCK_SIZE]byte
+ raw_pad: [BLOCK_SIZE_512]byte
when T == Context_256 {
- CURR_BLOCK_SIZE :: SHA256_BLOCK_SIZE
+ CURR_BLOCK_SIZE :: BLOCK_SIZE_256
pm_len := 8 // 64-bits for length
} else when T == Context_512 {
- CURR_BLOCK_SIZE :: SHA512_BLOCK_SIZE
+ CURR_BLOCK_SIZE :: BLOCK_SIZE_512
pm_len := 16 // 128-bits for length
}
pad := raw_pad[:CURR_BLOCK_SIZE]
@@ -576,37 +249,27 @@ final :: proc(ctx: ^$T, hash: []byte) {
endian.unchecked_put_u64be(hash[i * 8:], ctx.h[i])
}
}
-
- ctx.is_initialized = false
}
-/*
- SHA2 implementation
-*/
-
-SHA256_BLOCK_SIZE :: 64
-SHA512_BLOCK_SIZE :: 128
-
-Context_256 :: struct {
- block: [SHA256_BLOCK_SIZE]byte,
- h: [8]u32,
- bitlength: u64,
- length: u64,
- md_bits: int,
-
- is_initialized: bool,
+// clone clones the Context other into ctx.
+clone :: proc(ctx, other: ^$T) {
+ ctx^ = other^
}
-Context_512 :: struct {
- block: [SHA512_BLOCK_SIZE]byte,
- h: [8]u64,
- bitlength: u64,
- length: u64,
- md_bits: int,
+// reset sanitizes the Context. The Context must be re-initialized to
+// be used again.
+reset :: proc(ctx: ^$T) {
+ if !ctx.is_initialized {
+ return
+ }
- is_initialized: bool,
+ mem.zero_explicit(ctx, size_of(ctx^))
}
+/*
+ SHA2 implementation
+*/
+
@(private)
sha256_k := [64]u32 {
0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
@@ -737,12 +400,12 @@ sha2_transf :: proc "contextless" (ctx: ^$T, data: []byte) {
w: [64]u32
wv: [8]u32
t1, t2: u32
- CURR_BLOCK_SIZE :: SHA256_BLOCK_SIZE
+ CURR_BLOCK_SIZE :: BLOCK_SIZE_256
} else when T == Context_512 {
w: [80]u64
wv: [8]u64
t1, t2: u64
- CURR_BLOCK_SIZE :: SHA512_BLOCK_SIZE
+ CURR_BLOCK_SIZE :: BLOCK_SIZE_512
}
data := data
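Selecting the digest size now happens through dedicated init procs instead of setting `md_bits` by hand before calling init. A sketch built only from the procs and constants in this diff; the proc and variable names are illustrative:

import "core:crypto/sha2"

// sha2_example is an illustrative helper, not part of the package API.
sha2_example :: proc(msg: []byte) {
	// SHA-256 uses Context_256 ...
	ctx256: sha2.Context_256
	sha2.init_256(&ctx256)
	sha2.update(&ctx256, msg)

	digest256: [sha2.DIGEST_SIZE_256]byte
	sha2.final(&ctx256, digest256[:])

	// ... while SHA-512/256 is a Context_512 with its own init proc.
	ctx512: sha2.Context_512
	sha2.init_512_256(&ctx512)
	sha2.update(&ctx512, msg)

	digest512_256: [sha2.DIGEST_SIZE_512_256]byte
	sha2.final(&ctx512, digest512_256[:])
}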
diff --git a/core/crypto/sha3/sha3.odin b/core/crypto/sha3/sha3.odin
index f91baad3d..87ff9c9cb 100644
--- a/core/crypto/sha3/sha3.odin
+++ b/core/crypto/sha3/sha3.odin
@@ -1,3 +1,13 @@
+/*
+package sha3 implements the SHA3 hash algorithm family.
+
+The SHAKE XOF can be found in crypto/shake. If the pre-standardization
+Keccak algorithm is required, it can be found in crypto/legacy/keccak,
+though its use is discouraged.
+
+See:
+- https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.202.pdf
+*/
package sha3
/*
@@ -6,359 +16,81 @@ package sha3
List of contributors:
zhibog, dotbmp: Initial implementation.
-
- Interface for the SHA3 hashing algorithm. The SHAKE functionality can be found in package shake.
- If you wish to compute a Keccak hash, you can use the keccak package, it will use the original padding.
*/
-import "core:io"
-import "core:os"
-
import "../_sha3"
-/*
- High level API
-*/
-
+// DIGEST_SIZE_224 is the SHA3-224 digest size in bytes.
DIGEST_SIZE_224 :: 28
+// DIGEST_SIZE_256 is the SHA3-256 digest size in bytes.
DIGEST_SIZE_256 :: 32
+// DIGEST_SIZE_384 is the SHA3-384 digest size in bytes.
DIGEST_SIZE_384 :: 48
+// DIGEST_SIZE_512 is the SHA3-512 digest size in bytes.
DIGEST_SIZE_512 :: 64
-// hash_string_224 will hash the given input and return the
-// computed hash
-hash_string_224 :: proc(data: string) -> [DIGEST_SIZE_224]byte {
- return hash_bytes_224(transmute([]byte)(data))
-}
+// BLOCK_SIZE_224 is the SHA3-224 block size in bytes.
+BLOCK_SIZE_224 :: _sha3.RATE_224
+// BLOCK_SIZE_256 is the SHA3-256 block size in bytes.
+BLOCK_SIZE_256 :: _sha3.RATE_256
+// BLOCK_SIZE_384 is the SHA3-384 block size in bytes.
+BLOCK_SIZE_384 :: _sha3.RATE_384
+// BLOCK_SIZE_512 is the SHA3-512 block size in bytes.
+BLOCK_SIZE_512 :: _sha3.RATE_512
-// hash_bytes_224 will hash the given input and return the
-// computed hash
-hash_bytes_224 :: proc(data: []byte) -> [DIGEST_SIZE_224]byte {
- hash: [DIGEST_SIZE_224]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_224
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
+// Context is a SHA3 instance.
+Context :: distinct _sha3.Context
-// hash_string_to_buffer_224 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_224 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_224(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_224 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_224 :: proc(data, hash: []byte) {
- ctx: Context
+// init_224 initializes a Context for SHA3-224.
+init_224 :: proc(ctx: ^Context) {
ctx.mdlen = DIGEST_SIZE_224
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream_224 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_224 :: proc(s: io.Stream) -> ([DIGEST_SIZE_224]byte, bool) {
- hash: [DIGEST_SIZE_224]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_224
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
+ _init(ctx)
}
-// hash_file_224 will read the file provided by the given handle
-// and compute a hash
-hash_file_224 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_224]byte, bool) {
- if !load_at_once {
- return hash_stream_224(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_224(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_224]byte{}, false
-}
-
-hash_224 :: proc {
- hash_stream_224,
- hash_file_224,
- hash_bytes_224,
- hash_string_224,
- hash_bytes_to_buffer_224,
- hash_string_to_buffer_224,
-}
-
-// hash_string_256 will hash the given input and return the
-// computed hash
-hash_string_256 :: proc(data: string) -> [DIGEST_SIZE_256]byte {
- return hash_bytes_256(transmute([]byte)(data))
-}
-
-// hash_bytes_256 will hash the given input and return the
-// computed hash
-hash_bytes_256 :: proc(data: []byte) -> [DIGEST_SIZE_256]byte {
- hash: [DIGEST_SIZE_256]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_256
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer_256 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_256 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_256(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_256 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_256 :: proc(data, hash: []byte) {
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_256
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream_256 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_256 :: proc(s: io.Stream) -> ([DIGEST_SIZE_256]byte, bool) {
- hash: [DIGEST_SIZE_256]byte
- ctx: Context
+// init_256 initializes a Context for SHA3-256.
+init_256 :: proc(ctx: ^Context) {
ctx.mdlen = DIGEST_SIZE_256
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
-
-// hash_file_256 will read the file provided by the given handle
-// and compute a hash
-hash_file_256 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_256]byte, bool) {
- if !load_at_once {
- return hash_stream_256(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_256(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_256]byte{}, false
+ _init(ctx)
}
-hash_256 :: proc {
- hash_stream_256,
- hash_file_256,
- hash_bytes_256,
- hash_string_256,
- hash_bytes_to_buffer_256,
- hash_string_to_buffer_256,
-}
-
-// hash_string_384 will hash the given input and return the
-// computed hash
-hash_string_384 :: proc(data: string) -> [DIGEST_SIZE_384]byte {
- return hash_bytes_384(transmute([]byte)(data))
-}
-
-// hash_bytes_384 will hash the given input and return the
-// computed hash
-hash_bytes_384 :: proc(data: []byte) -> [DIGEST_SIZE_384]byte {
- hash: [DIGEST_SIZE_384]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_384
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer_384 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_384 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_384(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_384 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_384 :: proc(data, hash: []byte) {
- ctx: Context
+// init_384 initializes a Context for SHA3-384.
+init_384 :: proc(ctx: ^Context) {
ctx.mdlen = DIGEST_SIZE_384
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream_384 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_384 :: proc(s: io.Stream) -> ([DIGEST_SIZE_384]byte, bool) {
- hash: [DIGEST_SIZE_384]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_384
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
-
-// hash_file_384 will read the file provided by the given handle
-// and compute a hash
-hash_file_384 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_384]byte, bool) {
- if !load_at_once {
- return hash_stream_384(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_384(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_384]byte{}, false
-}
-
-hash_384 :: proc {
- hash_stream_384,
- hash_file_384,
- hash_bytes_384,
- hash_string_384,
- hash_bytes_to_buffer_384,
- hash_string_to_buffer_384,
+ _init(ctx)
}
-// hash_string_512 will hash the given input and return the
-// computed hash
-hash_string_512 :: proc(data: string) -> [DIGEST_SIZE_512]byte {
- return hash_bytes_512(transmute([]byte)(data))
-}
-
-// hash_bytes_512 will hash the given input and return the
-// computed hash
-hash_bytes_512 :: proc(data: []byte) -> [DIGEST_SIZE_512]byte {
- hash: [DIGEST_SIZE_512]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_512
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer_512 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_512 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_512(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_512 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_512 :: proc(data, hash: []byte) {
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_512
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream_512 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_512 :: proc(s: io.Stream) -> ([DIGEST_SIZE_512]byte, bool) {
- hash: [DIGEST_SIZE_512]byte
- ctx: Context
+// init_512 initializes a Context for SHA3-512.
+init_512 :: proc(ctx: ^Context) {
ctx.mdlen = DIGEST_SIZE_512
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
+ _init(ctx)
}
-// hash_file_512 will read the file provided by the given handle
-// and compute a hash
-hash_file_512 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_512]byte, bool) {
- if !load_at_once {
- return hash_stream_512(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_512(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_512]byte{}, false
+@(private)
+_init :: proc(ctx: ^Context) {
+ _sha3.init(transmute(^_sha3.Context)(ctx))
}
-hash_512 :: proc {
- hash_stream_512,
- hash_file_512,
- hash_bytes_512,
- hash_string_512,
- hash_bytes_to_buffer_512,
- hash_string_to_buffer_512,
+// update adds more data to the Context.
+update :: proc(ctx: ^Context, data: []byte) {
+ _sha3.update(transmute(^_sha3.Context)(ctx), data)
}
-/*
- Low level API
-*/
-
-Context :: _sha3.Sha3_Context
-
-init :: proc(ctx: ^Context) {
- _sha3.init(ctx)
+// final finalizes the Context, writes the digest to hash, and calls
+// reset on the Context.
+//
+// Iff finalize_clone is set, final will work on a copy of the Context,
+// which is useful for for calculating rolling digests.
+final :: proc(ctx: ^Context, hash: []byte, finalize_clone: bool = false) {
+ _sha3.final(transmute(^_sha3.Context)(ctx), hash, finalize_clone)
}
-update :: proc(ctx: ^Context, data: []byte) {
- _sha3.update(ctx, data)
+// clone clones the Context other into ctx.
+clone :: proc(ctx, other: ^Context) {
+ _sha3.clone(transmute(^_sha3.Context)(ctx), transmute(^_sha3.Context)(other))
}
-final :: proc(ctx: ^Context, hash: []byte) {
- _sha3.final(ctx, hash)
+// reset sanitizes the Context. The Context must be re-initialized to
+// be used again.
+reset :: proc(ctx: ^Context) {
+ _sha3.reset(transmute(^_sha3.Context)(ctx))
}
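clone is useful when several inputs share a common prefix: hash the prefix once, then fork the state. A sketch using the SHA3-256 procs above; the helper name is illustrative:

import "core:crypto/sha3"

// forked_sha3 is an illustrative helper, not part of the package API.
forked_sha3 :: proc(prefix, msg: []byte) -> [sha3.DIGEST_SIZE_256]byte {
	ctx: sha3.Context
	sha3.init_256(&ctx)
	sha3.update(&ctx, prefix)

	// Fork the prefix state, then hash the message on the copy only.
	forked: sha3.Context
	sha3.clone(&forked, &ctx)
	sha3.update(&forked, msg)

	digest: [sha3.DIGEST_SIZE_256]byte
	sha3.final(&forked, digest[:])

	sha3.reset(&ctx) // discard the retained prefix state explicitly
	return digest
}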
diff --git a/core/crypto/shake/shake.odin b/core/crypto/shake/shake.odin
index e4b4c1e31..072204800 100644
--- a/core/crypto/shake/shake.odin
+++ b/core/crypto/shake/shake.odin
@@ -1,3 +1,11 @@
+/*
+package shake implements the SHAKE XOF algorithm family.
+
+The SHA3 hash algorithm can be found in crypto/sha3.
+
+See:
+- https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.202.pdf
+*/
package shake
/*
@@ -6,201 +14,55 @@ package shake
List of contributors:
zhibog, dotbmp: Initial implementation.
-
- Interface for the SHAKE hashing algorithm.
- The SHA3 functionality can be found in package sha3.
-
- TODO: This should provide an incremental squeeze interface, in addition
- to the one-shot final call.
*/
-import "core:io"
-import "core:os"
-
import "../_sha3"
-/*
- High level API
-*/
-
-DIGEST_SIZE_128 :: 16
-DIGEST_SIZE_256 :: 32
-
-// hash_string_128 will hash the given input and return the
-// computed hash
-hash_string_128 :: proc(data: string) -> [DIGEST_SIZE_128]byte {
- return hash_bytes_128(transmute([]byte)(data))
-}
-
-// hash_bytes_128 will hash the given input and return the
-// computed hash
-hash_bytes_128 :: proc(data: []byte) -> [DIGEST_SIZE_128]byte {
- hash: [DIGEST_SIZE_128]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_128
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer_128 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_128 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_128(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer_128 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_128 :: proc(data, hash: []byte) {
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_128
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream_128 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_128 :: proc(s: io.Stream) -> ([DIGEST_SIZE_128]byte, bool) {
- hash: [DIGEST_SIZE_128]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_128
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
+// Context is a SHAKE128 or SHAKE256 instance.
+Context :: distinct _sha3.Context
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
-
-// hash_file_128 will read the file provided by the given handle
-// and compute a hash
-hash_file_128 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_128]byte, bool) {
- if !load_at_once {
- return hash_stream_128(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_128(buf[:]), ok
- }
- }
- return [DIGEST_SIZE_128]byte{}, false
-}
-
-hash_128 :: proc {
- hash_stream_128,
- hash_file_128,
- hash_bytes_128,
- hash_string_128,
- hash_bytes_to_buffer_128,
- hash_string_to_buffer_128,
-}
-
-// hash_string_256 will hash the given input and return the
-// computed hash
-hash_string_256 :: proc(data: string) -> [DIGEST_SIZE_256]byte {
- return hash_bytes_256(transmute([]byte)(data))
-}
-
-// hash_bytes_256 will hash the given input and return the
-// computed hash
-hash_bytes_256 :: proc(data: []byte) -> [DIGEST_SIZE_256]byte {
- hash: [DIGEST_SIZE_256]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_256
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
+// init_128 initializes a Context for SHAKE128.
+init_128 :: proc(ctx: ^Context) {
+ ctx.mdlen = 128 / 8
+ _init(ctx)
}
-// hash_string_to_buffer_256 will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer_256 :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer_256(transmute([]byte)(data), hash)
+// init_256 initializes a Context for SHAKE256.
+init_256 :: proc(ctx: ^Context) {
+ ctx.mdlen = 256 / 8
+ _init(ctx)
}
-// hash_bytes_to_buffer_256 will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer_256 :: proc(data, hash: []byte) {
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_256
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
+@(private)
+_init :: proc(ctx: ^Context) {
+ _sha3.init(transmute(^_sha3.Context)(ctx))
}
-// hash_stream_256 will read the stream in chunks and compute a
-// hash from its contents
-hash_stream_256 :: proc(s: io.Stream) -> ([DIGEST_SIZE_256]byte, bool) {
- hash: [DIGEST_SIZE_256]byte
- ctx: Context
- ctx.mdlen = DIGEST_SIZE_256
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
+// write writes more data into the SHAKE instance. This MUST not be called
+// after any reads have been done, and attempts to do so will panic.
+write :: proc(ctx: ^Context, data: []byte) {
+ _sha3.update(transmute(^_sha3.Context)(ctx), data)
}
-// hash_file_256 will read the file provided by the given handle
-// and compute a hash
-hash_file_256 :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE_256]byte, bool) {
- if !load_at_once {
- return hash_stream_256(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes_256(buf[:]), ok
- }
+// read reads output from the SHAKE instance. There is no practical upper
+// limit to the amount of data that can be read from SHAKE. After read has
+// been called one or more times, further calls to write will panic.
+read :: proc(ctx: ^Context, dst: []byte) {
+ ctx_ := transmute(^_sha3.Context)(ctx)
+ if !ctx.is_finalized {
+ _sha3.shake_xof(ctx_)
}
- return [DIGEST_SIZE_256]byte{}, false
-}
-
-hash_256 :: proc {
- hash_stream_256,
- hash_file_256,
- hash_bytes_256,
- hash_string_256,
- hash_bytes_to_buffer_256,
- hash_string_to_buffer_256,
-}
-
-/*
- Low level API
-*/
-
-Context :: _sha3.Sha3_Context
-init :: proc(ctx: ^Context) {
- _sha3.init(ctx)
+ _sha3.shake_out(ctx_, dst)
}
-update :: proc(ctx: ^Context, data: []byte) {
- _sha3.update(ctx, data)
+// clone clones the Context other into ctx.
+clone :: proc(ctx, other: ^Context) {
+ _sha3.clone(transmute(^_sha3.Context)(ctx), transmute(^_sha3.Context)(other))
}
-final :: proc(ctx: ^Context, hash: []byte) {
- _sha3.shake_xof(ctx)
- _sha3.shake_out(ctx, hash[:])
+// reset sanitizes the Context. The Context must be re-initialized to
+// be used again.
+reset :: proc(ctx: ^Context) {
+ _sha3.reset(transmute(^_sha3.Context)(ctx))
}
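Unlike the hash packages, shake exposes a write/read pair rather than update/final, since an XOF has no fixed digest size. A sketch deriving 64 bytes of output in two reads, assuming incremental squeezing behaves as the read documentation above describes; the proc name is illustrative:

import "core:crypto/shake"

// shake_example is an illustrative helper, not part of the package API.
shake_example :: proc(msg: []byte) -> [64]byte {
	ctx: shake.Context
	shake.init_128(&ctx)
	shake.write(&ctx, msg)

	// After the first read, further writes would panic.
	out: [64]byte
	shake.read(&ctx, out[:32])
	shake.read(&ctx, out[32:])

	shake.reset(&ctx)
	return out
}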
diff --git a/core/crypto/sm3/sm3.odin b/core/crypto/sm3/sm3.odin
index 7a7a0b8a6..2faf37380 100644
--- a/core/crypto/sm3/sm3.odin
+++ b/core/crypto/sm3/sm3.odin
@@ -1,3 +1,9 @@
+/*
+package sm3 implements the SM3 hash algorithm.
+
+See:
+- https://datatracker.ietf.org/doc/html/draft-sca-cfrg-sm3-02
+*/
package sm3
/*
@@ -6,102 +12,29 @@ package sm3
List of contributors:
zhibog, dotbmp: Initial implementation.
-
- Implementation of the SM3 hashing algorithm, as defined in <https://datatracker.ietf.org/doc/html/draft-sca-cfrg-sm3-02>
*/
import "core:encoding/endian"
-import "core:io"
import "core:math/bits"
-import "core:os"
-
-/*
- High level API
-*/
+import "core:mem"
+// DIGEST_SIZE is the SM3 digest size in bytes.
DIGEST_SIZE :: 32
-// hash_string will hash the given input and return the
-// computed hash
-hash_string :: proc(data: string) -> [DIGEST_SIZE]byte {
- return hash_bytes(transmute([]byte)(data))
-}
-
-// hash_bytes will hash the given input and return the
-// computed hash
-hash_bytes :: proc(data: []byte) -> [DIGEST_SIZE]byte {
- hash: [DIGEST_SIZE]byte
- ctx: Context
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash[:])
- return hash
-}
-
-// hash_string_to_buffer will hash the given input and assign the
-// computed hash to the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_string_to_buffer :: proc(data: string, hash: []byte) {
- hash_bytes_to_buffer(transmute([]byte)(data), hash)
-}
-
-// hash_bytes_to_buffer will hash the given input and write the
-// computed hash into the second parameter.
-// It requires that the destination buffer is at least as big as the digest size
-hash_bytes_to_buffer :: proc(data, hash: []byte) {
- ctx: Context
- init(&ctx)
- update(&ctx, data)
- final(&ctx, hash)
-}
-
-// hash_stream will read the stream in chunks and compute a
-// hash from its contents
-hash_stream :: proc(s: io.Stream) -> ([DIGEST_SIZE]byte, bool) {
- hash: [DIGEST_SIZE]byte
- ctx: Context
- init(&ctx)
-
- buf := make([]byte, 512)
- defer delete(buf)
-
- read := 1
- for read > 0 {
- read, _ = io.read(s, buf)
- if read > 0 {
- update(&ctx, buf[:read])
- }
- }
- final(&ctx, hash[:])
- return hash, true
-}
+// BLOCK_SIZE is the SM3 block size in bytes.
+BLOCK_SIZE :: 64
-// hash_file will read the file provided by the given handle
-// and compute a hash
-hash_file :: proc(hd: os.Handle, load_at_once := false) -> ([DIGEST_SIZE]byte, bool) {
- if !load_at_once {
- return hash_stream(os.stream_from_handle(hd))
- } else {
- if buf, ok := os.read_entire_file(hd); ok {
- return hash_bytes(buf[:]), ok
- }
- }
- return [DIGEST_SIZE]byte{}, false
-}
+// Context is an SM3 instance.
+Context :: struct {
+ state: [8]u32,
+ x: [BLOCK_SIZE]byte,
+ bitlength: u64,
+ length: u64,
-hash :: proc {
- hash_stream,
- hash_file,
- hash_bytes,
- hash_string,
- hash_bytes_to_buffer,
- hash_string_to_buffer,
+ is_initialized: bool,
}
-/*
- Low level API
-*/
-
+// init initializes a Context.
init :: proc(ctx: ^Context) {
ctx.state[0] = IV[0]
ctx.state[1] = IV[1]
@@ -118,6 +51,7 @@ init :: proc(ctx: ^Context) {
ctx.is_initialized = true
}
+// update adds more data to the Context.
update :: proc(ctx: ^Context, data: []byte) {
assert(ctx.is_initialized)
@@ -143,13 +77,26 @@ update :: proc(ctx: ^Context, data: []byte) {
}
}
-final :: proc(ctx: ^Context, hash: []byte) {
+// final finalizes the Context, writes the digest to hash, and calls
+// reset on the Context.
+//
+// Iff finalize_clone is set, final will work on a copy of the Context,
+// which is useful for for calculating rolling digests.
+final :: proc(ctx: ^Context, hash: []byte, finalize_clone: bool = false) {
assert(ctx.is_initialized)
if len(hash) < DIGEST_SIZE {
panic("crypto/sm3: invalid destination digest size")
}
+ ctx := ctx
+ if finalize_clone {
+ tmp_ctx: Context
+ clone(&tmp_ctx, ctx)
+ ctx = &tmp_ctx
+ }
+ defer(reset(ctx))
+
length := ctx.length
pad: [BLOCK_SIZE]byte
@@ -168,25 +115,27 @@ final :: proc(ctx: ^Context, hash: []byte) {
for i := 0; i < DIGEST_SIZE / 4; i += 1 {
endian.unchecked_put_u32be(hash[i * 4:], ctx.state[i])
}
+}
- ctx.is_initialized = false
+// clone clones the Context other into ctx.
+clone :: proc(ctx, other: ^Context) {
+ ctx^ = other^
+}
+
+// reset sanitizes the Context. The Context must be re-initialized to
+// be used again.
+reset :: proc(ctx: ^Context) {
+ if !ctx.is_initialized {
+ return
+ }
+
+ mem.zero_explicit(ctx, size_of(ctx^))
}
/*
SM3 implementation
*/
-BLOCK_SIZE :: 64
-
-Context :: struct {
- state: [8]u32,
- x: [BLOCK_SIZE]byte,
- bitlength: u64,
- length: u64,
-
- is_initialized: bool,
-}
-
@(private)
IV := [8]u32 {
0x7380166f, 0x4914b2b9, 0x172442d7, 0xda8a0600,