API Reference
IcechunkStore
Bases: Store, SyncMixin
Source code in icechunk/__init__.py
branch: str | None
property
Return the current branch name.
has_uncommitted_changes: bool
property
Return True if there are uncommitted changes to the store.
snapshot_id: str
property
Return the current snapshot id.
supports_listing: bool
property
Does the store support listing?
supports_partial_writes: bool
property
Does the store support partial writes?
supports_writes: bool
property
Does the store support writes?
__init__(store, read_only=False, *args, **kwargs)
Create a new IcechunkStore.
This should not be called directly; instead use the create, open_existing, or open_or_create class methods.
Source code in icechunk/__init__.py
ancestry()
as_read_only()
as_writeable()
async_ancestry()
Get the list of parents of the current version.
Returns:
Type | Description |
---|---|
AsyncGenerator[SnapshotMetadata, None] | |
async_checkout(snapshot_id=None, branch=None, tag=None)
async
Checkout a branch, tag, or specific snapshot.
If a branch is checked out, any subsequent commit attempts will update that branch reference if successful. If a tag or snapshot_id is checked out, the repository won't allow commits.
Source code in icechunk/__init__.py
async_commit(message)
async
Commit any uncommitted changes to the store.
This will create a new snapshot on the current branch and return the new snapshot id.
This method will fail if:
- there is no currently checked out branch
- some other writer updated the current branch since the repository was checked out
Source code in icechunk/__init__.py
async_merge(changes)
async
Merge the changes from another store into this store.
This will create a new snapshot on the current branch and return the new snapshot id.
This method will fail if:
- there is no currently checked out branch
- some other writer updated the current branch since the repository was checked out
The behavior is undefined if the stores applied conflicting changes.
Source code in icechunk/__init__.py
async_new_branch(branch_name)
async
Create a new branch pointing to the currently checked out snapshot.
This requires having no uncommitted changes.
async_reset()
async
Pop any uncommitted changes and reset to the previous snapshot state.
Returns:
Name | Type | Description |
---|---|---|
bytes | The changes that were taken from the working set | |
async_reset_branch(to_snapshot)
async
Reset the currently checked out branch to point to a different snapshot.
This requires having no uncommitted changes.
The snapshot id can be obtained as the result of a commit operation but, more commonly, as the id of one of the SnapshotMetadata objects returned by ancestry().
This operation edits the repository history; it must be executed carefully. In particular, the current snapshot may end up being inaccessible from any other branches or tags.
Source code in icechunk/__init__.py
async_set_virtual_ref(key, location, *, offset, length)
async
Store a virtual reference to a chunk.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | The chunk to store the reference under. This is the fully qualified zarr key eg: 'array/c/0/0/0' | required |
location | str | The location of the chunk in storage. This is absolute path to the chunk in storage eg: 's3://bucket/path/to/file.nc' | required |
offset | int | The offset in bytes from the start of the file location in storage the chunk starts at | required |
length | int | The length of the chunk in bytes, measured from the given offset | required |
Source code in icechunk/__init__.py
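Resolving a virtual reference amounts to reading length bytes starting at offset from location. A minimal stand-alone sketch of that byte-range read, using a local file in place of object storage (this is an illustration, not the icechunk implementation):

```python
import os
import tempfile

# Illustrative only: resolve a virtual reference (location, offset, length)
# by reading `length` bytes starting at `offset` from `location`.
def read_virtual_chunk(location: str, offset: int, length: int) -> bytes:
    with open(location, "rb") as f:
        f.seek(offset)
        return f.read(length)

# A throwaway local file stands in for e.g. an s3:// object.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"headerCHUNKDATAtrailer")
    path = f.name

chunk = read_virtual_chunk(path, offset=6, length=9)
print(chunk)  # b'CHUNKDATA'
os.unlink(path)
```

The key is still the fully qualified zarr chunk key (e.g. 'array/c/0/0/0'); only the bytes come from the external location.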
async_tag(tag_name, snapshot_id)
async
Create a tag pointing to the currently checked out snapshot.
change_set_bytes()
Get the complete list of changes applied in this session, serialized to bytes.
This method is useful in combination with IcechunkStore.distributed_commit. When a write session is too large to execute on a single machine, it can be distributed across multiple workers. Each worker writes its changes independently (map), and then a single commit is executed by a coordinator (reduce).
This method provides a way to gather a "description" of the changes applied by a worker. The resulting bytes, together with the change_set_bytes of other workers, can be fed to distributed_commit.
This API is subject to change; it will be replaced by a merge operation at the Store level.
Source code in icechunk/__init__.py
checkout(snapshot_id=None, branch=None, tag=None)
Checkout a branch, tag, or specific snapshot.
If a branch is checked out, any subsequent commit attempts will update that branch reference if successful. If a tag or snapshot_id is checked out, the repository won't allow commits.
Source code in icechunk/__init__.py
clear()
async
Clear the store.
This will remove all contents from the current session, including all groups and all arrays. But it will not modify the repository history.
commit(message)
Commit any uncommitted changes to the store.
This will create a new snapshot on the current branch and return the new snapshot id.
This method will fail if:
- there is no currently checked out branch
- some other writer updated the current branch since the repository was checked out
Source code in icechunk/__init__.py
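The two failure conditions above are a form of optimistic concurrency control: a commit succeeds only if a branch is checked out and its tip still equals the snapshot seen at checkout. A minimal stand-alone model of that rule (class and method names here are hypothetical, not the icechunk API):

```python
# Toy model of the commit rules described above: a commit fails if no branch
# is checked out, or if another writer moved the branch tip since this
# session checked it out (optimistic concurrency).
class BranchTips:
    def __init__(self):
        self.tips = {"main": "snap-0"}

class Session:
    def __init__(self, repo: BranchTips, branch):
        self.repo = repo
        self.branch = branch
        self.base = repo.tips[branch] if branch else None

    def commit(self, new_snapshot: str) -> str:
        if self.branch is None:
            raise ValueError("no branch checked out")
        if self.repo.tips[self.branch] != self.base:
            raise ValueError("branch moved since checkout; commit rejected")
        self.repo.tips[self.branch] = new_snapshot
        self.base = new_snapshot
        return new_snapshot

repo = BranchTips()
a, b = Session(repo, "main"), Session(repo, "main")
a.commit("snap-1")          # succeeds: tip unchanged since checkout
try:
    b.commit("snap-2")      # fails: 'a' already moved the branch tip
except ValueError as e:
    conflict = str(e)
print(conflict)
```

In the real store, the losing writer would typically re-checkout the branch and retry.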
create(storage, read_only=False, config=None, *args, **kwargs)
classmethod
Create a new IcechunkStore with the given storage configuration.
If a store already exists at the given location, an error will be raised.
Source code in icechunk/__init__.py
delete(key)
async
Remove a key from the store.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | | required |
exists(key)
async
Check if a key exists in the store.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | | required |
Returns:
Type | Description |
---|---|
bool | |
get(key, prototype, byte_range=None)
async
Retrieve the value associated with a given key.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | | required |
byte_range | tuple[int, Optional[int]] | | None |
Returns:
Type | Description |
---|---|
Buffer | |
Source code in icechunk/__init__.py
get_partial_values(prototype, key_ranges)
async
Retrieve possibly partial values from given key_ranges.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key_ranges | Iterable[tuple[str, tuple[int | None, int | None]]] | Ordered set of (key, range) pairs; a key may occur multiple times with different ranges. | required |
Returns:
Type | Description |
---|---|
list of values, in the order of the key_ranges, may contain null/none for missing keys | |
Source code in icechunk/__init__.py
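The (start, end) range semantics can be sketched with plain Python slicing. This is an illustration over an in-memory dict, not the store's implementation, and treating None as an open-ended bound is an assumption:

```python
# Sketch of partial-value retrieval over an in-memory mapping. Each request
# is (key, (start, end)); None bounds are treated as open-ended, and missing
# keys yield None, matching the "may contain null/none" note above.
def get_partial_values(data, key_ranges):
    out = []
    for key, (start, end) in key_ranges:
        value = data.get(key)
        out.append(None if value is None else value[start:end])
    return out

data = {"array/c/0": b"0123456789"}
results = get_partial_values(data, [
    ("array/c/0", (2, 5)),      # middle slice
    ("array/c/0", (7, None)),   # open-ended tail
    ("missing", (0, 4)),        # absent key -> None
])
print(results)  # [b'234', b'789', None]
```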
is_empty(prefix)
async
Check if the directory is empty.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prefix | str | Prefix of keys to check. | required |
Returns:
Type | Description |
---|---|
bool | True if the store is empty, False otherwise. |
Source code in icechunk/__init__.py
list()
Retrieve all keys in the store.
Returns:
Type | Description |
---|---|
AsyncIterator[str, None] | |
Source code in icechunk/__init__.py
list_dir(prefix)
Retrieve all keys and prefixes with a given prefix and which do not contain the character “/” after the given prefix.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prefix | str | | required |
Returns:
Type | Description |
---|---|
AsyncIterator[str, None] | |
Source code in icechunk/__init__.py
list_prefix(prefix)
Retrieve all keys in the store that begin with a given prefix. Keys are returned relative to the root of the store.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prefix | str | | required |
Returns:
Type | Description |
---|---|
AsyncIterator[str, None] | |
Source code in icechunk/__init__.py
merge(changes)
Merge the changes from another store into this store.
This will create a new snapshot on the current branch and return the new snapshot id.
This method will fail if:
- there is no currently checked out branch
- some other writer updated the current branch since the repository was checked out
The behavior is undefined if the stores applied conflicting changes.
Source code in icechunk/__init__.py
new_branch(branch_name)
Create a new branch pointing to the currently checked out snapshot.
This requires having no uncommitted changes.
open(*args, **kwargs)
async
classmethod
This method is called by zarr-python; it's not intended for users. Use one of IcechunkStore.open_existing, IcechunkStore.create, or IcechunkStore.open_or_create instead.
Source code in icechunk/__init__.py
open_existing(storage, read_only=False, config=None, *args, **kwargs)
classmethod
Open an existing IcechunkStore from the given storage.
If there is no store at the given location, an error will be raised.
It is recommended to use the cached storage option for better performance. If cached=True, this will be configured automatically with the provided storage_config as the underlying storage backend.
Source code in icechunk/__init__.py
preserve_read_only()
Context manager that allows unpickling this store while preserving its read_only status. By default, stores are set to read-only after unpickling.
Source code in icechunk/__init__.py
reset()
Pop any uncommitted changes and reset to the previous snapshot state.
Returns:
Name | Type | Description |
---|---|---|
bytes | The changes that were taken from the working set | |
reset_branch(to_snapshot)
Reset the currently checked out branch to point to a different snapshot.
This requires having no uncommitted changes.
The snapshot id can be obtained as the result of a commit operation but, more commonly, as the id of one of the SnapshotMetadata objects returned by ancestry().
This operation edits the repository history; it must be executed carefully. In particular, the current snapshot may end up being inaccessible from any other branches or tags.
Source code in icechunk/__init__.py
set(key, value)
async
Store a (key, value) pair.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | | required |
value | Buffer | | required |
set_if_not_exists(key, value)
async
Store a (key, value) pair if the key is not already present.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | | required |
value | Buffer | | required |
Source code in icechunk/__init__.py
set_partial_values(key_start_values)
async
Store values at a given key, starting at byte range_start.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key_start_values | list[tuple[str, int, BytesLike]] | Set of (key, range_start, value) triples. A key may occur multiple times with different range_starts; range_starts (considering the length of the respective values) must not specify overlapping ranges for the same key. | required |
Source code in icechunk/__init__.py
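The write-at-offset and non-overlap semantics above can be sketched with an in-memory buffer per key; this helper is purely illustrative and not part of the icechunk API:

```python
# Sketch of (key, range_start, value) write semantics: each value is written
# into the key's buffer starting at range_start. The non-overlap requirement
# from the table above is checked explicitly for illustration.
def set_partial_values(store, triples):
    written = {}
    for key, start, value in triples:
        end = start + len(value)
        for s, e in written.setdefault(key, []):
            if start < e and s < end:
                raise ValueError(f"overlapping write for {key!r}")
        written[key].append((start, end))
        buf = store.setdefault(key, bytearray())
        if len(buf) < end:
            buf.extend(b"\x00" * (end - len(buf)))
        buf[start:end] = value

buffers = {}
set_partial_values(buffers, [("k", 0, b"abcd"), ("k", 4, b"efgh")])
print(bytes(buffers["k"]))  # b'abcdefgh'
```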
set_read_only()
set_virtual_ref(key, location, *, offset, length)
Store a virtual reference to a chunk.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | The chunk to store the reference under. This is the fully qualified zarr key eg: 'array/c/0/0/0' | required |
location | str | The location of the chunk in storage. This is absolute path to the chunk in storage eg: 's3://bucket/path/to/file.nc' | required |
offset | int | The offset in bytes from the start of the file location in storage the chunk starts at | required |
length | int | The length of the chunk in bytes, measured from the given offset | required |
Source code in icechunk/__init__.py
set_writeable()
sync_clear()
Clear the store.
This will remove all contents from the current session, including all groups and all arrays. But it will not modify the repository history.
tag(tag_name, snapshot_id)
StorageConfig
Storage configuration for an IcechunkStore
Currently supports memory, filesystem, and S3 storage backends. Use the class methods to create a StorageConfig object with the desired backend.
Example:
storage_config = StorageConfig.memory("prefix")
storage_config = StorageConfig.filesystem("/path/to/root")
storage_config = StorageConfig.s3_from_env("bucket", "prefix")
storage_config = StorageConfig.s3_from_config("bucket", "prefix", ...)
Source code in icechunk/_icechunk_python.pyi
filesystem(root)
classmethod
Create a StorageConfig object for a local filesystem storage backend with the given root directory
memory(prefix)
classmethod
s3_anonymous(bucket, prefix, endpoint_url, allow_http=False, region=None)
classmethod
Create a StorageConfig object for an S3 Object Storage compatible storage using anonymous access
Source code in icechunk/_icechunk_python.pyi
s3_from_config(bucket, prefix, credentials, endpoint_url, allow_http=False, region=None)
classmethod
Create a StorageConfig object for an S3 Object Storage compatible storage backend with the given bucket, prefix, and configuration
This method will directly use the provided credentials to authenticate with the S3 service, ignoring any environment variables.
Source code in icechunk/_icechunk_python.pyi
s3_from_env(bucket, prefix, endpoint_url, allow_http=False, region=None)
classmethod
Create a StorageConfig object for an S3 Object Storage compatible storage backend with the given bucket and prefix
This assumes that the necessary credentials are available in the environment: AWS_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN (optional), AWS_ENDPOINT_URL (optional), AWS_ALLOW_HTTP (optional).
Source code in icechunk/_icechunk_python.pyi
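Before calling an s3_from_env-style constructor, it can be worth verifying that the variables listed above are actually set. A small stand-alone check (the values here are placeholders, not real credentials):

```python
import os

# Placeholder values for illustration only -- in practice these come from
# your shell, CI secrets, or an instance profile.
os.environ.setdefault("AWS_REGION", "us-east-1")
os.environ.setdefault("AWS_ACCESS_KEY_ID", "AKIAEXAMPLE")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "example-secret")

required = ["AWS_REGION", "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"]
missing = [name for name in required if name not in os.environ]
print(missing)  # [] -> safe to call StorageConfig.s3_from_env(...)
```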
StoreConfig
Configuration for an IcechunkStore
Source code in icechunk/_icechunk_python.pyi
__init__(get_partial_values_concurrency=None, inline_chunk_threshold_bytes=None, unsafe_overwrite_refs=None, virtual_ref_config=None)
Create a StoreConfig object with the given configuration options
Parameters:
Name | Type | Description | Default |
---|---|---|---|
get_partial_values_concurrency | int | None | The number of concurrent requests to make when fetching partial values | None |
inline_chunk_threshold_bytes | int | None | The threshold at which to inline chunks in the store in bytes. When set, chunks smaller than this threshold will be inlined in the store. Default is 512 bytes when not specified. | None |
unsafe_overwrite_refs | bool | None | Whether to allow overwriting refs in the store. Default is False. Experimental. | None |
virtual_ref_config | VirtualRefConfig | None | Configurations for virtual references such as credentials and endpoints | None |
Returns:
Type | Description |
---|---|
StoreConfig | A StoreConfig object with the given configuration options |
Source code in icechunk/_icechunk_python.pyi
VirtualRefConfig
Source code in icechunk/_icechunk_python.pyi
S3
Config for an S3 Object Storage compatible storage backend
Source code in icechunk/_icechunk_python.pyi
s3_anonymous(*, endpoint_url=None, allow_http=None, region=None)
classmethod
Create a VirtualReferenceConfig object for an S3 Object Storage compatible storage using anonymous access
Source code in icechunk/_icechunk_python.pyi
s3_from_config(credentials, *, endpoint_url=None, allow_http=None, region=None)
classmethod
Create a VirtualReferenceConfig object for an S3 Object Storage compatible storage backend with the given bucket, prefix, and configuration
This method will directly use the provided credentials to authenticate with the S3 service, ignoring any environment variables.
Source code in icechunk/_icechunk_python.pyi
s3_from_env()
classmethod
Create a VirtualReferenceConfig object for an S3 Object Storage compatible storage backend with the given bucket and prefix
This assumes that the necessary credentials are available in the environment: AWS_REGION or AWS_DEFAULT_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN (optional), AWS_ENDPOINT_URL (optional), AWS_ALLOW_HTTP (optional).
Source code in icechunk/_icechunk_python.pyi
XarrayDatasetWriter
dataclass
Write Xarray Datasets to a group in an Icechunk store.
This class is private API. Please do not use it.
Source code in icechunk/xarray.py
write_eager()
Write in-memory variables to store.
Returns:
Type | Description |
---|---|
None | |
write_lazy(chunkmanager_store_kwargs=None, split_every=None)
Write lazy arrays (e.g. dask) to store.
Source code in icechunk/xarray.py
write_metadata(encoding=None)
This method creates new Zarr arrays when necessary, writes attributes, and any in-memory arrays.
Source code in icechunk/xarray.py
to_icechunk(dataset, store, *, group=None, mode=None, safe_chunks=True, append_dim=None, region=None, encoding=None, chunkmanager_store_kwargs=None, split_every=None, **kwargs)
Write an Xarray Dataset to a group of an icechunk store.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
store | MutableMapping, str or path-like | Store or path to directory in local or remote file system. | required |
mode | "w", "w-", "a", "a-", "r+", None | Persistence mode: "w" means create (overwrite if exists); "w-" means create (fail if exists); "a" means override all existing variables including dimension coordinates (create if does not exist); "a-" means only append those variables that have append_dim; "r+" means modify existing array values only. | "w" |
group | str | Group path (a.k.a. path in zarr terminology). | None |
encoding | dict | Nested dictionary with variable names as keys and dictionaries of variable-specific encodings as values, e.g., {"my_variable": {"dtype": "int16", "scale_factor": 0.1}}. | None |
append_dim | hashable | If set, the dimension along which the data will be appended. All other dimensions on overridden variables must remain the same size. | None |
region | dict or "auto" | Optional mapping from dimension names to either "auto" or integer slices, indicating the region of existing Zarr arrays in which to write this dataset's data. If "auto" is provided, the region is inferred from the coordinates of the existing arrays; alternatively, integer slices can be provided, for example {"x": slice(0, 1000), "y": slice(10000, 11000)}. Users are expected to ensure that the specified region aligns with Zarr chunk boundaries, and that dask chunks are also aligned. Xarray makes limited checks that these multiple chunk boundaries line up. It is possible to write incomplete chunks and corrupt the data with this option if you are not careful. | None |
safe_chunks | bool | If True, only allow writes when there is a many-to-one relationship between Zarr chunks (specified in encoding) and Dask chunks. Set False to override this restriction; however, data may become corrupted if Zarr arrays are written in parallel. In addition to the many-to-one relationship validation, it also detects partial chunk writes when using the region parameter; these partial chunks are considered unsafe in mode "r+" but safe in mode "a". Note: even with these validations it can still be unsafe to write two or more chunked arrays to the same location in parallel if they are not writing to independent regions. | True |
chunkmanager_store_kwargs | dict | Additional keyword arguments passed on to the ChunkManager's store method used to store lazy arrays. | None |
split_every | int | None | Number of tasks to merge at every level of the tree reduction. | None |
Returns:
Type | Description |
---|---|
None | |
Notes
Two restrictions apply to the use of region:
- If region is set, all variables in a dataset must have at least one dimension in common with the region. Other variables should be written in a separate single call to to_zarr().
- Dimensions cannot be included in both region and append_dim at the same time. To create empty arrays to fill in with region, use the XarrayDatasetWriter directly.
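The chunk-alignment caveat on region can be checked before writing. The helper below is a stand-alone sketch (not an icechunk or xarray function) of when a slice lands on Zarr chunk boundaries, allowing a final partial chunk at the end of the dimension:

```python
# Stand-alone sketch: does a region slice align with Zarr chunk boundaries?
# The start must sit on a chunk boundary; the stop must too, unless it
# reaches the end of the dimension (a trailing partial chunk is allowed).
def region_aligned(sl: slice, chunk_size: int, dim_size: int) -> bool:
    start = sl.start or 0
    stop = dim_size if sl.stop is None else sl.stop
    return start % chunk_size == 0 and (stop % chunk_size == 0 or stop == dim_size)

print(region_aligned(slice(0, 1000), chunk_size=100, dim_size=5000))    # True
print(region_aligned(slice(50, 1000), chunk_size=100, dim_size=5000))   # False
print(region_aligned(slice(4900, 4950), chunk_size=100, dim_size=4950)) # True
```

A similar check on the dask chunking of the in-memory array would cover the "dask chunks are also aligned" half of the caveat.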
Source code in icechunk/xarray.py