API Reference
BasicConflictSolver
Bases: ConflictSolver
A basic conflict solver that allows simple configuration of resolution behavior.
This conflict solver attempts to resolve a limited set of conflicts that may occur during a rebase operation, based on the configuration options provided.
- When a user attribute conflict is encountered, the behavior is determined by the on_user_attributes_conflict option.
- When a chunk conflict is encountered, the behavior is determined by the on_chunk_conflict option.
- When an array that has been updated is deleted, fail_on_delete_of_updated_array determines whether the rebase operation fails.
- When a group that has been updated is deleted, fail_on_delete_of_updated_group determines whether the rebase operation fails.
Source code in icechunk/_icechunk_python.pyi
__init__(*, on_user_attributes_conflict=VersionSelection.UseOurs, on_chunk_conflict=VersionSelection.UseOurs, fail_on_delete_of_updated_array=False, fail_on_delete_of_updated_group=False)
Create a BasicConflictSolver object with the given configuration options.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
on_user_attributes_conflict | VersionSelection | The behavior to use when a user attribute conflict is encountered | VersionSelection.UseOurs |
on_chunk_conflict | VersionSelection | The behavior to use when a chunk conflict is encountered | VersionSelection.UseOurs |
fail_on_delete_of_updated_array | bool | Whether to fail when an array that has been updated is deleted | False |
fail_on_delete_of_updated_group | bool | Whether to fail when a group that has been updated is deleted | False |
Source code in icechunk/_icechunk_python.pyi
CachingConfig
Configuration for how Icechunk caches its metadata files
Source code in icechunk/_icechunk_python.pyi
CompressionAlgorithm
Bases: Enum
Enum for selecting the compression algorithm used by Icechunk to write its metadata files
Source code in icechunk/_icechunk_python.pyi
CompressionConfig
Configuration for how Icechunk compresses its metadata files
Source code in icechunk/_icechunk_python.pyi
Conflict
A conflict detected between snapshots
Source code in icechunk/_icechunk_python.pyi
conflict_type: ConflictType
property
The type of conflict detected
conflicted_chunks: list[list[int]] | None
property
If the conflict is a chunk conflict, this will return the list of chunk indices that are in conflict
path: str
property
The path of the node that caused the conflict
ConflictDetector
Bases: ConflictSolver
A conflict solver that can be used to detect conflicts between two stores, but does not resolve them.
Where the BasicConflictSolver will attempt to resolve conflicts, the ConflictDetector will only detect them. During a rebase operation the ConflictDetector will raise a RebaseFailedError if any conflicts are detected, allowing the rebase to be retried with a different conflict resolution strategy. If no conflicts are detected, the rebase operation will succeed.
Source code in icechunk/_icechunk_python.pyi
ConflictError
Bases: Exception
Error raised when a commit operation fails due to a conflict.
Source code in icechunk/session.py
actual_parent: str
property
The actual parent snapshot ID of the branch that the session attempted to commit to.
When the session is based on a branch, this is the snapshot ID of the branch tip. If this error is raised, it means the branch was modified and committed by another session after the session was created.
expected_parent: str
property
The expected parent snapshot ID.
This is the snapshot ID that the session was based on when the commit operation was called.
ConflictErrorData
Data class for conflict errors. This describes the snapshot conflict detected when committing a session
If this error is raised, it means the branch was modified and committed by another session after the session was created.
Source code in icechunk/_icechunk_python.pyi
actual_parent: str
property
The actual parent snapshot ID of the branch that the session attempted to commit to.
When the session is based on a branch, this is the snapshot ID of the branch tip. If this error is raised, it means the branch was modified and committed by another session after the session was created.
expected_parent: str
property
The expected parent snapshot ID.
This is the snapshot ID that the session was based on when the commit operation was called.
ConflictSolver
An abstract conflict solver that can be used to detect or resolve conflicts between two stores
This should never be used directly, but should be subclassed to provide specific conflict resolution behavior
Source code in icechunk/_icechunk_python.pyi
ConflictType
Bases: Enum
Type of conflict detected
Source code in icechunk/_icechunk_python.pyi
IcechunkError
IcechunkStore
Bases: Store
, SyncMixin
Source code in icechunk/store.py
supports_listing: bool
property
Does the store support listing?
supports_partial_writes: bool
property
Does the store support partial writes?
supports_writes: bool
property
Does the store support writes?
__init__(store, *args, **kwargs)
Create a new IcechunkStore.
This should not be called directly; instead use the create, open_existing, or open_or_create class methods.
Source code in icechunk/store.py
clear()
async
Clear the store.
This will remove all contents from the current session, including all groups and all arrays. But it will not modify the repository history.
delete(key)
async
delete_dir(prefix)
async
exists(key)
async
Check if a key exists in the store.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | | required |
Returns:
Type | Description |
---|---|
bool | |
get(key, prototype, byte_range=None)
async
Retrieve the value associated with a given key.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | | required |
byte_range | ByteRequest | ByteRequest may be one of the following. If not provided, all data associated with the key is retrieved.
| None |
Returns:
Type | Description |
---|---|
Buffer | |
Source code in icechunk/store.py
get_partial_values(prototype, key_ranges)
async
Retrieve possibly partial values from given key_ranges.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key_ranges | Iterable[tuple[str, tuple[int | None, int | None]]] | Ordered set of key, range pairs, a key may occur multiple times with different ranges | required |
Returns:
Type | Description |
---|---|
list of values, in the order of the key_ranges, may contain null/none for missing keys | |
Source code in icechunk/store.py
is_empty(prefix)
async
Check if the directory is empty.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prefix | str | Prefix of keys to check. | required |
Returns:
Type | Description |
---|---|
bool | True if the store is empty, False otherwise. |
Source code in icechunk/store.py
list()
Retrieve all keys in the store.
Returns:
Type | Description |
---|---|
AsyncIterator[str] | |
Source code in icechunk/store.py
list_dir(prefix)
Retrieve all keys and prefixes with a given prefix and which do not contain the character “/” after the given prefix.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prefix | str | | required |
Returns:
Type | Description |
---|---|
AsyncIterator[str] | |
Source code in icechunk/store.py
list_prefix(prefix)
Retrieve all keys in the store that begin with a given prefix. Keys are returned relative to the root of the store.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prefix | str | | required |
Returns:
Type | Description |
---|---|
AsyncIterator[str] | |
Source code in icechunk/store.py
set(key, value)
async
Store a (key, value) pair.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | | required |
value | Buffer | | required |
set_if_not_exists(key, value)
async
Store a (key, value) pair if the key is not already present.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | | required |
value | Buffer | | required |
Source code in icechunk/store.py
set_partial_values(key_start_values)
async
Store values at a given key, starting at byte range_start.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key_start_values | list[tuple[str, int, BytesLike]] | set of key, range_start, values triples, a key may occur multiple times with different range_starts, range_starts (considering the length of the respective values) must not specify overlapping ranges for the same key | required |
Source code in icechunk/store.py
set_virtual_ref(key, location, *, offset, length, checksum=None, validate_container=False)
Store a virtual reference to a chunk.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
key | str | The chunk to store the reference under. This is the fully qualified zarr key, e.g. 'array/c/0/0/0' | required |
location | str | The location of the chunk in storage. This is the absolute path to the chunk in storage, e.g. 's3://bucket/path/to/file.nc' | required |
offset | int | The offset in bytes from the start of the file at which the chunk begins | required |
length | int | The length of the chunk in bytes, measured from the given offset | required |
checksum | str | datetime | None | The etag or last_modified_at field of the object | None |
validate_container | bool | If set to true, fail for locations that don't match any existing virtual chunk container | False |
Source code in icechunk/store.py
sync_clear()
Clear the store.
This will remove all contents from the current session, including all groups and all arrays. But it will not modify the repository history.
RebaseFailedData
Data class for rebase failed errors. This describes the error that occurred when rebasing a session
Source code in icechunk/_icechunk_python.pyi
conflicts: list[Conflict]
property
The conflicts that occurred during the rebase operation
snapshot: str
property
The snapshot ID that the session was rebased to
RebaseFailedError
Bases: Exception
Error raised when a rebase operation fails.
Source code in icechunk/session.py
conflicts: list[Conflict]
property
List of conflicts that occurred during the rebase operation.
snapshot_id: str
property
The snapshot ID that the rebase operation failed on.
Repository
An Icechunk repository.
Source code in icechunk/repository.py
ancestry(*, branch=None, tag=None, snapshot=None)
Get the ancestry of a snapshot.
Args:
    branch: The branch to get the ancestry of.
    tag: The tag to get the ancestry of.
    snapshot: The snapshot ID to get the ancestry of.

Returns:
    list[SnapshotMetadata]: The ancestry of the snapshot, listing the snapshots and their metadata.

Only one of the arguments may be specified.
Source code in icechunk/repository.py
create(storage, config=None, virtual_chunk_credentials=None)
classmethod
Create a new Icechunk repository.
If one already exists at the given store location, an error will be raised.
Args:
    storage: The storage configuration for the repository.
    config: The repository configuration. If not provided, a default configuration will be used.
Source code in icechunk/repository.py
create_branch(branch, snapshot_id)
Create a new branch at the given snapshot.
Args:
    branch: The name of the branch to create.
    snapshot_id: The snapshot ID to create the branch at.
Source code in icechunk/repository.py
create_tag(tag, snapshot_id)
Create a new tag at the given snapshot.
Args:
    tag: The name of the tag to create.
    snapshot_id: The snapshot ID to create the tag at.
Source code in icechunk/repository.py
delete_branch(branch)
exists(storage)
staticmethod
Check if a repository exists at the given storage location.
Args:
    storage: The storage configuration for the repository.
fetch_config(storage)
staticmethod
list_branches()
list_tags()
lookup_branch(branch)
Get the tip snapshot ID of a branch.
Args:
    branch: The branch to get the tip of.

Returns:
    str: The snapshot ID of the tip of the branch.
Source code in icechunk/repository.py
lookup_tag(tag)
Get the snapshot ID of a tag.
Args:
    tag: The tag to get the snapshot ID of.

Returns:
    str: The snapshot ID of the tag.
open(storage, config=None, virtual_chunk_credentials=None)
classmethod
Open an existing Icechunk repository.
If no repository exists at the given storage location, an error will be raised.
Args:
    storage: The storage configuration for the repository.
    config: The repository settings. If not provided, a default configuration will be loaded from the repository.
Source code in icechunk/repository.py
open_or_create(storage, config=None, virtual_chunk_credentials=None)
classmethod
Open an existing Icechunk repository or create a new one if it does not exist.
Args:
    storage: The storage configuration for the repository.
    config: The repository settings. If not provided, a default configuration will be loaded from the repository.
Source code in icechunk/repository.py
readonly_session(*, branch=None, tag=None, snapshot=None)
Create a read-only session.
This can be thought of as a read-only checkout of the repository at a given snapshot. When branch or tag are provided, the session will be based on the tip of the branch or the snapshot ID of the tag.
Args:
    branch: If provided, the branch to create the session on.
    tag: If provided, the tag to create the session on.
    snapshot: If provided, the snapshot ID to create the session on.

Returns:
    Session: The read-only session, pointing to the specified snapshot, tag, or branch.

Only one of the arguments may be specified.
Source code in icechunk/repository.py
reset_branch(branch, snapshot_id)
Reset a branch to a specific snapshot.
This will permanently alter the history of the branch such that the tip of the branch is the specified snapshot.
Args:
    branch: The branch to reset.
    snapshot_id: The snapshot ID to reset the branch to.
Source code in icechunk/repository.py
save_config()
Save the repository configuration to storage; this configuration will be used in future calls to Repository.open.
writable_session(branch)
Create a writable session on a branch
Like the read-only session, this can be thought of as a checkout of the repository at the tip of the branch. However, this session is writable and can be used to make changes to the repository. When ready, the changes can be committed to the branch, after which the session will become a read-only session on the new snapshot.
Args:
    branch: The branch to create the session on.

Returns:
    Session: The writable session on the branch.
Source code in icechunk/repository.py
RepositoryConfig
Configuration for an Icechunk repository
Source code in icechunk/_icechunk_python.pyi
Session
A session object that allows for reading and writing data from an Icechunk repository.
Source code in icechunk/session.py
branch: str | None
property
The branch that the session is based on. This is only set if the session is writable
has_uncommitted_changes: bool
property
Whether the session has uncommitted changes. This is only possibly true if the session is writable
read_only: bool
property
Whether the session is read-only.
snapshot_id: str
property
The base snapshot ID of the session
store: IcechunkStore
property
Get a zarr Store object for reading and writing data from the repository using zarr python
all_virtual_chunk_locations()
commit(message)
Commit the changes in the session to the repository
When successful, the writable session is completed and the session is now read-only and based on the new commit. The snapshot ID of the new commit is returned.
If the session is out of date, this will raise a ConflictError exception depicting the conflict that occurred. The session will need to be rebased before committing.
Args:
    message (str): The message to write with the commit.

Returns:
    str: The snapshot ID of the new commit.
Source code in icechunk/session.py
discard_changes()
merge(other)
rebase(solver)
Rebase the session to the latest ancestry of the branch.
This method will iteratively crawl the ancestry of the branch and apply the changes from the branch to the session. If a conflict is detected, the conflict solver will be used to optionally resolve the conflict. When complete, the session will be based on the latest commit of the branch and the session will be ready to attempt another commit.
When a conflict is detected and a resolution is not possible with the provided solver, a RebaseFailedError exception will be raised. This exception contains the snapshot ID that the rebase failed on and a list of the conflicts that occurred.
Args:
    solver (ConflictSolver): The conflict solver to use when a conflict is detected.

Raises:
    RebaseFailedError: When a conflict is detected and the solver fails to resolve it.
Source code in icechunk/session.py
SnapshotMetadata
Metadata for a snapshot
Source code in icechunk/_icechunk_python.pyi
id: str
property
The snapshot ID
message: str
property
The commit message of the snapshot
written_at: datetime.datetime
property
The timestamp when the snapshot was written
Storage
Storage configuration for an IcechunkStore
Currently supports memory, filesystem, and S3 storage backends. Use the class methods to create a StorageConfig object with the desired backend.
Ex:
storage_config = StorageConfig.memory("prefix")
storage_config = StorageConfig.filesystem("/path/to/root")
storage_config = StorageConfig.object_store("s3://bucket/prefix", ["my", "options"])
storage_config = StorageConfig.s3_from_env("bucket", "prefix")
storage_config = StorageConfig.s3_from_config("bucket", "prefix", ...)
Source code in icechunk/_icechunk_python.pyi
StorageConcurrencySettings
Configuration for how Icechunk uses its Storage instance
Source code in icechunk/_icechunk_python.pyi
StorageSettings
Configuration for how Icechunk uses its Storage instance
Source code in icechunk/_icechunk_python.pyi
VersionSelection
azure_credentials(*, access_key=None, sas_token=None, bearer_token=None, from_env=None)
Create credentials for the Azure Blob Storage object store.
If all arguments are None, credentials are fetched from the operating system environment.
Source code in icechunk/credentials.py
azure_from_env_credentials()
Instruct the Azure Blob Storage object store to fetch credentials from the operating system environment.
azure_static_credentials(*, access_key=None, sas_token=None, bearer_token=None)
Create static credentials for the Azure Blob Storage object store.
Source code in icechunk/credentials.py
azure_storage(*, container, prefix, access_key=None, sas_token=None, bearer_token=None, from_env=None, config=None)
Create a Storage instance that saves data in Azure Blob Storage object store.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
container | str | The container where the repository will store its data | required |
prefix | str | The prefix within the container that is the root directory of the repository | required |
access_key | str | None | Azure Blob Storage credential access key | None |
sas_token | str | None | Azure Blob Storage credential SAS token | None |
bearer_token | str | None | Azure Blob Storage credential bearer token | None |
from_env | bool | None | Fetch credentials from the operating system environment | None |
Source code in icechunk/storage.py
containers_credentials(m={}, **kwargs)
Build a map of credentials for virtual chunk containers.
Example usage:
import icechunk as ic
config = ic.RepositoryConfig.default()
config.inline_chunk_threshold_bytes = 512
virtual_store_config = ic.s3_store(
region="us-east-1",
endpoint_url="http://localhost:9000",
allow_http=True,
s3_compatible=True,
)
container = ic.VirtualChunkContainer("s3", "s3://", virtual_store_config)
config.set_virtual_chunk_container(container)
credentials = ic.containers_credentials(
s3=ic.s3_credentials(access_key_id="ACCESS_KEY", secret_access_key="SECRET")
)
repo = ic.Repository.create(
storage=ic.local_filesystem_storage(store_path),
config=config,
virtual_chunk_credentials=credentials,
)
Parameters:
Name | Type | Description | Default |
---|---|---|---|
m | Mapping[str, AnyS3Credential] | A mapping from container name to credentials. | {} |
Source code in icechunk/credentials.py
gcs_credentials(*, service_account_file=None, service_account_key=None, application_credentials=None, from_env=None)
Create credentials for the Google Cloud Storage object store.
If all arguments are None, credentials are fetched from the operating system environment.
Source code in icechunk/credentials.py
gcs_from_env_credentials()
Instruct the Google Cloud Storage object store to fetch credentials from the operating system environment.
gcs_static_credentials(*, service_account_file=None, service_account_key=None, application_credentials=None)
Create static credentials for the Google Cloud Storage object store.
Source code in icechunk/credentials.py
gcs_storage(*, bucket, prefix, service_account_file=None, service_account_key=None, application_credentials=None, from_env=None, config=None)
Create a Storage instance that saves data in Google Cloud Storage object store.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
bucket | str | The bucket where the repository will store its data | required |
prefix | str | None | The prefix within the bucket that is the root directory of the repository | required |
from_env | bool | None | Fetch credentials from the operating system environment | None |
Source code in icechunk/storage.py
in_memory_storage()
Create a Storage instance that saves data in memory.
This Storage implementation is used for tests. Data will be lost after the process finishes, and can only be accessed through the Storage instance returned. Different instances don't share data.
Source code in icechunk/storage.py
local_filesystem_storage(path)
Create a Storage instance that saves data in the local file system.
This Storage instance is not recommended for production data
s3_anonymous_credentials()
s3_credentials(*, access_key_id=None, secret_access_key=None, session_token=None, expires_after=None, anonymous=None, from_env=None, get_credentials=None)
Create credentials for S3 and S3 compatible object stores.
If all arguments are None, credentials are fetched from the environment.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
access_key_id | str | None | S3 credential access key | None |
secret_access_key | str | None | S3 credential secret access key | None |
session_token | str | None | Optional S3 credential session token | None |
expires_after | datetime | None | Optional expiration for the object store credentials | None |
anonymous | bool | None | If set to True requests to the object store will not be signed | None |
from_env | bool | None | Fetch credentials from the operating system environment | None |
get_credentials | Callable[[], S3StaticCredentials] | None | Use this function to get and refresh object store credentials | None |
Source code in icechunk/credentials.py
s3_from_env_credentials()
Instruct S3 and S3-compatible object stores to gather credentials from the operating system environment.
s3_refreshable_credentials(get_credentials)
Create refreshable credentials for S3 and S3 compatible object stores.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
get_credentials | Callable[[], S3StaticCredentials] | Use this function to get and refresh the credentials. The function must be picklable. | required |
Source code in icechunk/credentials.py
s3_static_credentials(*, access_key_id, secret_access_key, session_token=None, expires_after=None)
Create static credentials for S3 and S3 compatible object stores.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
access_key_id | str | S3 credential access key | required |
secret_access_key | str | S3 credential secret access key | required |
session_token | str | None | Optional S3 credential session token | None |
expires_after | datetime | None | Optional expiration for the object store credentials | None |
Source code in icechunk/credentials.py
s3_storage(*, bucket, prefix, region=None, endpoint_url=None, allow_http=False, access_key_id=None, secret_access_key=None, session_token=None, expires_after=None, anonymous=None, from_env=None, get_credentials=None)
Create a Storage instance that saves data in S3 or S3 compatible object stores.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
bucket | str | The bucket where the repository will store its data | required |
prefix | str | None | The prefix within the bucket that is the root directory of the repository | required |
region | str | None | The region to use in the object store | None |
endpoint_url | str | None | Optional endpoint where the object store serves data, example: http://localhost:9000 | None |
allow_http | bool | If the object store can be accessed using http protocol instead of https | False |
access_key_id | str | None | S3 credential access key | None |
secret_access_key | str | None | S3 credential secret access key | None |
session_token | str | None | Optional S3 credential session token | None |
expires_after | datetime | None | Optional expiration for the object store credentials | None |
anonymous | bool | None | If set to True requests to the object store will not be signed | None |
from_env | bool | None | Fetch credentials from the operating system environment | None |
get_credentials | Callable[[], S3StaticCredentials] | None | Use this function to get and refresh object store credentials | None |
Source code in icechunk/storage.py
s3_store(region=None, endpoint_url=None, allow_http=False, anonymous=False, s3_compatible=False)
Build an ObjectStoreConfig instance for S3 or S3 compatible object stores.
Source code in icechunk/storage.py
XarrayDatasetWriter
dataclass
Write Xarray Datasets to a group in an Icechunk store.
This class is private API. Please do not use it.
Source code in icechunk/xarray.py
write_eager()
Write in-memory variables to store.
Returns:
Type | Description |
---|---|
None | |
write_lazy(chunkmanager_store_kwargs=None, split_every=None)
Write lazy arrays (e.g. dask) to store.
Source code in icechunk/xarray.py
write_metadata(encoding=None)
This method creates new Zarr arrays when necessary, writes attributes, and any in-memory arrays.
Source code in icechunk/xarray.py
to_icechunk(dataset, store, *, group=None, mode=None, safe_chunks=True, append_dim=None, region=None, encoding=None, chunkmanager_store_kwargs=None, split_every=None, **kwargs)
Write an Xarray Dataset to a group of an icechunk store.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
store | MutableMapping, str or path-like | Store or path to directory in local or remote file system. | required |
mode | "w", "w-", "a", "a-", "r+", None | Persistence mode: "w" means create (overwrite if exists); "w-" means create (fail if exists); "a" means override all existing variables including dimension coordinates (create if does not exist); "a-" means only append those variables that have append_dim; "r+" means modify existing array values only (raise an error if any metadata or shapes would change) | "w" |
group | str | Group path (a.k.a. 'path' in zarr terminology) | None |
encoding | dict | Nested dictionary with variable names as keys and dictionaries of variable-specific encodings as values, e.g., {"my_variable": {"dtype": "int16", "scale_factor": 0.1}} | None |
append_dim | hashable | If set, the dimension along which the data will be appended. All other dimensions on overridden variables must remain the same size. | None |
region | dict or "auto" | Optional mapping from dimension names to either "auto" or integer slices, indicating the region of existing zarr array(s) in which to write this dataset's data; for example, {'x': slice(0, 1000), 'y': slice(10000, 11000)} writes to the region 0:1000 along x and 10000:11000 along y. If "auto" is provided, the existing store is opened and the region inferred by matching indexes. Users are expected to ensure that the specified region aligns with Zarr chunk boundaries, and that dask chunks are also aligned; Xarray makes limited checks that these chunk boundaries line up. It is possible to write incomplete chunks and corrupt the data with this option if you are not careful. | None |
safe_chunks | bool | If True, only allow writes to when there is a many-to-one relationship between Zarr chunks (specified in encoding) and Dask chunks. Set False to override this restriction; however, data may become corrupted if Zarr arrays are written in parallel. In addition to the many-to-one relationship validation, it also detects partial chunks writes when using the region parameter, these partial chunks are considered unsafe in the mode "r+" but safe in the mode "a". Note: Even with these validations it can still be unsafe to write two or more chunked arrays in the same location in parallel if they are not writing in independent regions. | True |
chunkmanager_store_kwargs | dict | Additional keyword arguments passed on to the ChunkManager store method used to store chunked arrays, e.g. dask.array.store() for dask | None |
split_every | int | None | Number of tasks to merge at every level of the tree reduction. | None |
Returns:
Type | Description |
---|---|
None | |
Notes

Two restrictions apply to the use of region:

- If region is set, all variables in a dataset must have at least one dimension in common with the region. Other variables should be written in a separate single call to to_zarr().
- Dimensions cannot be included in both region and append_dim at the same time. To create empty arrays to fill in with region, use the XarrayDatasetWriter directly.
Source code in icechunk/xarray.py