path: root/core/bytes
Commit log: message (author, date; files changed, lines -removed/+added)
* wasm: support more vendor libraries (Laytan Laats, 2024-09-09; 1 file, -2/+2)
  Adds support for:
  - box2d
  - cgltf
  - stb image
  - stb rect pack
* bytes: fix last_index_byte off-by-one (laytan, 2024-09-05; 1 file, -5/+3)
* Return `0, nil` in all `io` cases where an empty slice is provided (Feoramund, 2024-08-28; 2 files, -0/+12)
* Check `int(abs)` instead to avoid overflows (Feoramund, 2024-08-28; 1 file, -2/+3)
* Measure `bytes.Buffer` size by `length` instead of `capacity` (Feoramund, 2024-08-28; 1 file, -1/+1)
* Add `Seek` behavior to `bytes.Buffer` (Feoramund, 2024-08-28; 1 file, -1/+24)
* Don't invalidate `prev_rune` if `Reader` seek failed (Feoramund, 2024-08-28; 1 file, -1/+1)
* Return `.EOF` in `bytes.buffer_read_at` instead (Feoramund, 2024-08-28; 1 file, -1/+1)
  This is consistent with the other stream `read` procs.
* Make `bytes.reader_init` return an `io.Stream` (Feoramund, 2024-08-28; 1 file, -1/+2)
  Makes the API like the other stream `init` procs.
* Add missing `io.Stream_Mode` responses (Feoramund, 2024-08-28; 1 file, -1/+1)
* fix simd var typo (Rory OConnell, 2024-08-19; 1 file, -1/+1)
* core/bytes: Tweak `index_byte` and `last_index_byte` (Yawning Angel, 2024-08-19; 1 file, -89/+223)
  - Assume unaligned loads are cheap
  - Explicitly use 256-bit or 128-bit SIMD to avoid AVX512
  - Limit "vectorized" scanning to 128 bits if SIMD is emulated via SWAR
  - Add a few more benchmark cases
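For reference, a minimal usage sketch of the two procedures touched above, assuming the usual `index_byte(s: []byte, c: byte) -> int` and `last_index_byte(s: []byte, c: byte) -> int` shape, with -1 meaning the byte was not found:

```odin
package scan_demo

import "core:bytes"
import "core:fmt"

main :: proc() {
	s := "hello, bytes"
	data := transmute([]byte)s

	// Forward and backward scan for a single byte; -1 means "not found".
	fmt.println(bytes.index_byte(data, 'l'))      // 2
	fmt.println(bytes.last_index_byte(data, 'l')) // 3
	fmt.println(bytes.index_byte(data, 'z'))      // -1
}
```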
* Minor style change (gingerBill, 2024-08-13; 1 file, -6/+6)
* Set `SIMD_SCAN_WIDTH` based on `size_of(uintptr)` (Feoramund, 2024-08-10; 1 file, -8/+24)
* Merge `core:simd/util` into `core:bytes` (Feoramund, 2024-08-10; 1 file, -21/+130)
* Use `for x in y` construct for `bytes` iteration (Feoramund, 2024-08-09; 1 file, -4/+4)
  This cannot be applied to the `strings` version, as that would cause a rune-by-rune iteration, not a byte-by-byte one.
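A small sketch of the distinction mentioned in this commit message: ranging over a `string` in Odin decodes runes, while ranging over a `[]byte` yields the raw bytes.

```odin
package iter_demo

import "core:fmt"

main :: proc() {
	s := "héllo" // 5 runes, 6 bytes ('é' is two bytes in UTF-8)
	data := transmute([]byte)s

	for r in s { // rune-by-rune: 5 iterations
		fmt.printf("%v ", r)
	}
	fmt.println()

	for b in data { // byte-by-byte: 6 iterations
		fmt.printf("%d ", b)
	}
	fmt.println()
}
```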
* Make `simd_util` index procs `contextless` where applicable (Feoramund, 2024-08-09; 1 file, -2/+2)
* Simplify and make `simd_util` cross-platform (Feoramund, 2024-08-09; 1 file, -14/+4)
  This new algorithm uses a Scalar->Vector->Scalar iteration loop which requires no masking off of any incomplete data chunks. Also, the width was reduced to 32 bytes instead of 64, as I found this to be about as fast as the previous 64-byte x86 version.
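A rough sketch of that Scalar -> Vector -> Scalar shape, not the actual implementation: `chunk_contains` is a hypothetical placeholder for the real `core:simd` compare-and-reduce over one chunk.

```odin
package scan_sketch

SCAN_WIDTH :: 32 // chunk size, matching the 32-byte width mentioned above

index_byte_sketch :: proc(s: []byte, c: byte) -> int {
	n := len(s)
	base := uintptr(raw_data(s))

	i := 0
	// Scalar head: advance until the address is SCAN_WIDTH-aligned.
	for i < n && (base + uintptr(i)) % SCAN_WIDTH != 0 {
		if s[i] == c { return i }
		i += 1
	}

	// Vector body: only complete chunks, so no partial data has to be masked off.
	for i + SCAN_WIDTH <= n {
		if chunk_contains(s[i:i+SCAN_WIDTH], c) {
			break // let the scalar tail pin down the exact offset
		}
		i += SCAN_WIDTH
	}

	// Scalar tail: the matching chunk (if any) plus the trailing remainder.
	for ; i < n; i += 1 {
		if s[i] == c { return i }
	}
	return -1
}

// Hypothetical stand-in for a SIMD "any lane equal?" test over one chunk.
chunk_contains :: proc(chunk: []byte, c: byte) -> bool {
	for b in chunk {
		if b == c { return true }
	}
	return false
}
```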
* Use vectorized `index_*` procs in `core` (Feoramund, 2024-08-06; 1 file, -8/+39)
* core/bytes: Add `alias` and `alias_inexactly` (Yawning Angel, 2024-07-16; 1 file, -0/+25)
* Update `tests\core\encoding\cbor` to use new test runner. (Jeroen van Rijn, 2024-06-02; 1 file, -35/+35)
  It was leaky and required a substantial number of `loc := #caller_location` additions to parts of the core library to make it easier to track down how and where it leaked. The tests now run fine multi-threaded.
* improve some Negative_Read/Negative_Write logic (Laytan Laats, 2024-04-25; 1 file, -1/+1)
  Returns the actual error if one is set, instead of swallowing it for the less descriptive negative error. Also fixes an out-of-bounds slice error in `bufio.writer_write` because it wasn't checking the returned `m`.
* Fix typo in bytes.scrub (FourteenBrush, 2024-01-17; 1 file, -1/+1)
* _buffer_grow: Preserve allocator if already set via init_buffer_allocator (Jeroen van Rijn, 2023-08-18; 1 file, -1/+4)
  Fixes #2756
* Update to new io interface (gingerBill, 2023-06-08; 2 files, -99/+42)
* fix bytes.buffer_init_allocator not using given allocator if len/cap is 0 (Laytan Laats, 2023-05-09; 1 file, -0/+5)
* Use `uint` instead of `int` to improve code generation for bounds checking (gingerBill, 2022-09-27; 2 files, -6/+3)
* Clean up of the core library to make the stream vtables not be pointers directly. (gingerBill, 2022-09-15; 2 files, -4/+4)
* Add `buffer_read_ptr` and `buffer_write_ptr` (gingerBill, 2022-07-14; 1 file, -0/+8)
* Convert all uses of `*_from_slice` to `*_from_bytes` where appropriate (gingerBill, 2022-05-16; 1 file, -1/+2)
* Add _safe versions (gingerBill, 2022-05-12; 1 file, -0/+43)
* Correct bytes._split_iterator (gingerBill, 2022-02-14; 1 file, -5/+5)
* Correct _split_iterator (gingerBill, 2022-02-14; 1 file, -32/+8)
* Remove the hidden NUL byte past the end from `bytes.clone` (gingerBill, 2022-01-01; 1 file, -2/+1)
* Fix `fields_proc` in `strings` and `bytes` (gingerBill, 2021-12-11; 1 file, -1/+1)
* Remove unneeded semicolons from the core library (gingerBill, 2021-08-31; 3 files, -685/+685)
* Move `bytes` utils back to EXR code for the time being. (Jeroen van Rijn, 2021-06-22; 1 file, -187/+0)
  Also, allow PNG example to be run directly from `core:image/png` directory.
* ZLIB: Start optimization. (Jeroen van Rijn, 2021-06-21; 1 file, -4/+10)
* Fix comment. (Jeroen van Rijn, 2021-06-18; 1 file, -2/+2)
* Add `bytes.buffer_create_of_type` and `bytes.buffer_convert_to_type`. (Jeroen van Rijn, 2021-06-18; 1 file, -0/+181)
  Convenience functions to reinterpret or cast one buffer to another type, or create a buffer of a specific type.

  Example:
```odin
fmt.println("Convert []f16le (x2) to []f32 (x2).");
b := []u8{0, 60, 0, 60}; // == []f16{1.0, 1.0}
res, backing, had_to_allocate, err := bytes.buffer_convert_to_type(2, f32, f16le, b);
fmt.printf("res : %v\n", res);                  // [1.000, 1.000]
fmt.printf("backing : %v\n", backing);          // &Buffer{buf = [0, 0, 128, 63, 0, 0, 128, 63], off = 0, last_read = Invalid}
fmt.printf("allocated: %v\n", had_to_allocate); // true
fmt.printf("err : %v\n", err);                  // false
if had_to_allocate { defer bytes.buffer_destroy(backing); }

fmt.println("\nConvert []f16le (x2) to []u16 (x2).");
res2: []u16;
res2, backing, had_to_allocate, err = bytes.buffer_convert_to_type(2, u16, f16le, b);
fmt.printf("res : %v\n", res2);                 // [15360, 15360]
fmt.printf("backing : %v\n", backing);          // Buffer.buf points to `b` because it could be converted in-place.
fmt.printf("allocated: %v\n", had_to_allocate); // false
fmt.printf("err : %v\n", err);                  // false
if had_to_allocate { defer bytes.buffer_destroy(backing); }

fmt.println("\nConvert []f16le (x2) to []u16 (x2), force_convert=true.");
res2, backing, had_to_allocate, err = bytes.buffer_convert_to_type(2, u16, f16le, b, true);
fmt.printf("res : %v\n", res2);                 // [1, 1]
fmt.printf("backing : %v\n", backing);          // Buffer.buf points to `b` because it could be converted in-place.
fmt.printf("allocated: %v\n", had_to_allocate); // false
fmt.printf("err : %v\n", err);                  // false
if had_to_allocate { defer bytes.buffer_destroy(backing); }
```
* Core library clean up: Make range expressions more consistent and replace uses of `..` with `..=` (gingerBill, 2021-06-14; 1 file, -1/+1)
* Add `bytes.remove`, `bytes.remove_all`, `strings.remove`, `strings.remove_all` (gingerBill, 2021-05-23; 1 file, -0/+8)
* Add truncate_to_byte and truncate_to_rune for packages strings and bytes (gingerBill, 2021-04-22; 1 file, -0/+15)
* Add buffer_read_at and buffer_write_at (gingerBill, 2021-04-14; 1 file, -0/+42)
* `split*_iterator` procedures for package bytes and strings (gingerBill, 2021-03-18; 1 file, -0/+110)
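A hedged usage sketch of the bytes-side iterator, assuming it mirrors `strings.split_iterator` (state passed by pointer, a `(field, ok)` pair consumed by `for ... in`):

```odin
package split_demo

import "core:bytes"
import "core:fmt"

main :: proc() {
	s := "a,b,c"
	data := transmute([]byte)s
	sep := []byte{','}

	// The iterator consumes the slice it is given, so iterate over a copy.
	it := data
	for field in bytes.split_iterator(&it, sep) {
		fmt.println(string(field)) // "a", "b", "c"
	}
}
```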
* Replace usage of `inline proc` with `#force_inline proc` in the core library (gingerBill, 2021-02-23; 1 file, -4/+4)
* Add `bytes.buffer_write_to` and `bytes.buffer_read_from` (gingerBill, 2020-12-17; 1 file, -8/+54)
* Make bytes.odin consistent with strings.odin in functionality (gingerBill, 2020-12-17; 2 files, -54/+49)
* Rename bytes/strings.odin to bytes/bytes.odin (gingerBill, 2020-12-17; 1 file, -0/+0)
* Minor correction to bytes.Buffer's vtable (gingerBill, 2020-12-05; 1 file, -0/+4)