Commit Graph

49 Commits

SHA1 Message Date
477e13afd0 Add missing zig_cc_binary import to the simple layer example in the documentation. 2023-04-24 10:04:50 +00:00
ed6444b775 Add Tensor.concatenate support, begin deprecating broadcastLeft, and compute transformer head scaling constant in f32 for higher precision. 2023-04-21 15:55:07 +00:00
11006ca08d Refactor torch module: merge PickleData into Parser as torch.File, rename value file to py_object.zig, use buffered reader for pickle and zip headers, adjust intermediate result handling, simplify Python dict representation, separate kwargs from args, and add extensive tests for long integers, protocol 0, zipped pickle, and a complex PyTorch Conv2d case; also streamline BufferStore initialization. 2023-04-20 15:43:18 +00:00
837f8fb111 Add support for the Llama 3.1 70B Instruct model to facilitate testing on high‑performance accelerators. 2023-04-19 10:23:44 +00:00
fdb7da5c9b Introduce sharding attributes to Llama weights to enable Tensor Parallelism. 2023-04-13 12:35:27 +00:00
833ff5f28d Upgrade PJRT CUDA Plugin to version 0.2.3, adding NCCL support for correct sharding. 2023-04-12 15:47:06 +00:00
8e43a45a3c Add event waiting when invoking a module and improve multi‑device sharding handling. 2023-04-11 11:32:09 +00:00
0189b71070 Rename zml.aio.Value to zml.aio.Metadata, simplify its type variants, and update torch pickle/eval APIs accordingly. 2023-04-07 16:45:58 +00:00
aea23c720e Update Llama example to use renamed zml.aio.Metadata (formerly Value) and reflect torch loader changes. 2023-04-05 14:09:59 +00:00
e25f70d923 Rename and simplify modules in zml/aio/torch: replace redundant qualified names, remove generic utilities, inline code, reorder functions for top‑to‑bottom readability, and extract parsing logic into parseTensor and parseStorage functions. 2023-04-04 17:20:53 +00:00
66881899ca Fix testLayer by removing unnecessary compile_options argument and updating testing logic for new sharded output, ensuring proper usage by llama.zig. 2023-03-31 14:23:45 +00:00
05d23beb23 Add Normalizer.fromHfJson to read HuggingFace tokenizer JSON and map to internal options, including a configurable magic space token and a debug flag for token merges. Adjust default handling of extra whitespaces to align with HF defaults. 2023-03-29 16:10:29 +00:00
ef922e3aea Fix empty JSON array handling in safetensor metadata loader and refactor torch loader (make ops slices const and improve readability). 2023-03-28 16:17:00 +00:00
aae37738a5 Update loader example to demonstrate handling of empty JSON arrays and improved torch loader readability 2023-03-22 14:52:33 +00:00
a4f0fc96c0 Integrate user sharding hints and HLO sharding annotations across MLIR dialects and ZML core, and remove the now‑unused module options arguments. 2023-03-21 10:50:39 +00:00
e30e35deeb Update benchmark example to use new user sharding hints and drop deprecated module options. 2023-03-20 15:31:44 +00:00
8746a5ce78 Expose zml/test_runner.zig publicly so users can employ the async test runner. Make the dependency on zml explicit and suggest treating test_runner as a zig_library rather than a filegroup. 2023-03-16 13:22:35 +00:00
fe531aef06 Clarify HuggingFace token handling in workspace, noting the standard CLI location and adding support for an environment variable. 2023-03-14 15:28:03 +00:00
cd2f2209d0 Create token directory if it does not exist. 2023-03-13 15:31:13 +00:00
70d40208a2 runtimes/cuda: Fix version variable definitions in the build script to enable successful CUDA builds. 2023-03-09 11:31:02 +00:00
7ef67eea27 zml: Relocate tests next to the functions they verify and remove obsolete dynamicSlice1d test. 2023-03-08 14:10:11 +00:00
dfa71018a5 zml: Remove pjrtx wrapper, migrate remaining helpers to their native modules, and fix blocking issue in Event.await. 2023-03-06 17:05:56 +00:00
0c126c2e12 runtimes/cuda: Upgrade CUDA to 12.6.2 and cuDNN to 9.4.0. 2023-03-03 15:17:26 +00:00
f595d22134 runtimes/rocm: Upgrade ROCm to version 6.2.2. 2023-03-01 13:15:50 +00:00
ecf52ad724 zml.tokenizer: Implement proper byte fallback support by converting hex byte strings (e.g., "<0x40>") to their characters and splitting unknown UTF-8 codepoints into bytes, fixing tokenization. 2023-02-28 14:40:25 +00:00
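The byte-fallback idea in the commit above can be sketched as follows. This is a hedged Python illustration, not ZML's actual Zig code: a token such as "<0x40>" is a hex byte string mapping to the raw byte 0x40 ('@'), and an out-of-vocabulary UTF-8 codepoint is split into per-byte "<0xNN>" tokens so no input falls through to an unknown token.

```python
import re

# Matches byte-fallback tokens of the form "<0xNN>" (two hex digits).
BYTE_TOKEN = re.compile(r"^<0x([0-9A-Fa-f]{2})>$")

def byte_token_to_char(token):
    """Convert a "<0xNN>" token back to its one-byte character, else None."""
    m = BYTE_TOKEN.match(token)
    return chr(int(m.group(1), 16)) if m else None

def split_unknown_codepoint(cp):
    """Split an out-of-vocabulary codepoint into per-byte fallback tokens."""
    return [f"<0x{b:02X}>" for b in cp.encode("utf-8")]
```

For example, "<0x40>" decodes to "@", and the two-byte UTF-8 codepoint "é" splits into "<0xC3>" and "<0xA9>".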
2f129f76c9 Add in-process sharding support across core ZML components (platform, shape, tensor, MLIR generation, buffers, and PJRT integration) 2023-02-24 17:33:14 +00:00
cad1a688da Add sharding usage to the benchmark and simple_layer example programs. 2023-02-23 11:18:27 +00:00
fc718ab649 Add StableHLO bindings for versioning functions, enabling portable serialization of StableHLO. 2023-02-22 15:41:33 +00:00
8fa3878fc3 PJRT: Add handling for rank‑0 case in getDimensions to avoid null pointer usage when num_dims is zero. 2023-02-17 10:47:15 +00:00
639f5cd994 Replace log with select for generating the attention mask to avoid NaNs on zero values. 2023-02-16 10:36:23 +00:00
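The numerical issue behind the log-to-select change above can be illustrated with NumPy (a hedged stand-in; the real change is in ZML's Zig/StableHLO code). Computing the additive attention mask as log(mask) yields -inf for masked positions only by evaluating log at zero, which is ill-defined and can poison gradients with NaNs; selecting between 0 and -inf per element avoids ever calling log.

```python
import numpy as np

def additive_mask_via_log(mask):
    # Problematic form: log(1) == 0 and log(0) == -inf,
    # but it relies on evaluating log at zero.
    with np.errstate(divide="ignore"):
        return np.log(mask.astype(np.float32))

def additive_mask_via_select(mask):
    # Safe form: never evaluates log; picks 0.0 or -inf per element.
    return np.where(mask.astype(bool), np.float32(0.0), np.float32(-np.inf))
```

Both functions produce the same mask values on 0/1 inputs; only the select form is safe under differentiation.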
24a7c98476 Implement scatterSlices functionality. 2023-02-14 13:52:49 +00:00
934acb35a8 zml: initialize Tensor.min and Tensor.max reductions with proper extreme values to ensure correct results 2023-02-10 12:28:41 +00:00
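The bug fixed in the commit above is a classic reduction-initialization error; here is a hedged NumPy illustration (not the actual ZML Zig code). Seeding a max reduction with 0 silently clips all-negative inputs, while seeding with the type's extreme value (-inf) gives the true maximum.

```python
import numpy as np

def reduce_max(xs, init):
    """Fold max over xs starting from the given initial value."""
    acc = init
    for x in xs:
        acc = max(acc, float(x))
    return acc

xs = np.array([-5.0, -2.0, -9.0])
wrong = reduce_max(xs, 0.0)      # 0.0: the zero seed clips the negatives
right = reduce_max(xs, -np.inf)  # -2.0: the true maximum of xs
```

The symmetric fix applies to min reductions, which must start from +inf.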
be6328813d zml: clean up dead and commented code; note that copyslice is currently broken and pending reimplementation 2023-02-08 17:13:47 +00:00
058e1415fa zml: deprecate buggy Tensor.chunk; introduce chunkExact and chunkAllowTrailing with clarified behavior 2023-02-07 12:42:34 +00:00
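The split between the two new chunking APIs above can be sketched in Python (hedged stand-ins for Zig's Tensor.chunkExact and Tensor.chunkAllowTrailing, whose exact signatures are not shown in the commit): the exact variant rejects lengths that do not divide evenly, while the trailing variant permits a shorter final chunk.

```python
def chunk_exact(xs, n):
    """Split xs into n equal chunks; error if len(xs) is not divisible by n."""
    if len(xs) % n != 0:
        raise ValueError("length must be divisible by the chunk count")
    size = len(xs) // n
    return [xs[i * size:(i + 1) * size] for i in range(n)]

def chunk_allow_trailing(xs, n):
    """Split xs into at most n chunks, allowing a shorter trailing chunk."""
    size = -(-len(xs) // n)  # ceiling division
    chunks = [xs[i * size:(i + 1) * size] for i in range(n)]
    return [c for c in chunks if c]  # drop empty tail slices
```

Making the divisibility requirement explicit in the name avoids the silent surprises the deprecated Tensor.chunk apparently had.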
7e131a106b Update examples/MODULE.bazel.lock to reflect XLA version bump. 2023-02-03 14:13:21 +00:00
0606ea1d7c Update Bazel workspace and runtime BUILD files to newer XLA, StableHLO, and LLVM versions, enabling batching‑dims support for the gather operator. 2023-02-01 15:58:30 +00:00
897786e440 aio: correct refAllDecls handling for yaml and nemo modules 2023-01-31 11:58:58 +00:00
7dcd8b516c zml/nn: fix resize implementations (resizeBilinear and resizeBicubic) and expand refAllDecls usage; all tests pass 2023-01-27 14:35:11 +00:00
5e1688cbfd aio: refactor PyTorch model parsing for better readability and optimize slice handling 2023-01-25 12:16:27 +00:00
ebdb8db213 zml/tests: re‑enable all Zig tests, fix precision issue by switching to f32, and add refAllDecls to ensure all declarations are tested 2023-01-23 16:28:19 +00:00
f39b16e13d zml/test_runner: add optional filtering of test functions via command‑line argument, allowing selective execution of tests (e.g., bazel run //zml:test -- sdpa) 2023-01-20 13:50:36 +00:00
b961856e5f zml/tensor: correct typo in uniform comment ('substract' → 'subtract') 2023-01-19 12:20:40 +00:00
ccdf218961 Add multi‑axis, batched gatherValues support to tensor, shape, nn, quantization, and torch modules. 2023-01-18 12:03:48 +00:00
16e066ec69 Add llama example demonstrating the new gatherValues functionality. 2023-01-11 09:58:09 +00:00
48b671f100 Fix CollectionOver scope error in ActivationCollector and clean dead code/comments in zml_utils.py 2023-01-10 09:43:03 +00:00
04ad137417 Update howto_torch2zml docs to explain why the output variable can be None. 2023-01-09 17:05:09 +00:00
fab1c93d5b docs: first model – fix const/var bug and enforce 80‑column width 2023-01-06 10:34:44 +00:00
eded305649 Add initial documentation and example projects for ZML, covering how‑to guides, tutorials, and benchmark examples. 2023-01-03 10:21:07 +00:00
266da6d4be Add initial Bazel build configuration, async runtime implementation, and core MLIR dialect definitions for ZML. 2023-01-02 14:28:25 +00:00