
# Walrus SDK

Store and retrieve blobs on Walrus decentralized storage using the TypeScript SDK.

The Walrus TypeScript SDK works directly with Walrus storage nodes or with the Walrus upload relay.
When you use the Walrus SDK without an upload relay, reading and writing Walrus blobs requires many
requests (approximately 2200 to write a blob and approximately 335 to read a blob). The upload relay
reduces the number of requests needed to write a blob, but reads through the Walrus SDK still
require many requests. For many applications, using publishers and aggregators is recommended. The
TypeScript SDK is most useful when your application needs to interact with Walrus directly, or when
users need to pay for their own storage.

The Walrus SDK exposes high-level methods for reading and writing blobs, as well as lower-level
methods for the individual steps in the process. You can use the lower-level methods to implement
more complex flows when you want more control over optimization.

## Installation

```bash npm2yarn
npm install --save @mysten/walrus @mysten/sui
```

## Setup

To use the Walrus SDK, create a client from the Sui TypeScript SDK and extend it with the Walrus
SDK:

```ts
import { SuiGrpcClient } from '@mysten/sui/grpc';
import { walrus } from '@mysten/walrus';

const client = new SuiGrpcClient({
	network: 'testnet',
	baseUrl: 'https://fullnode.testnet.sui.io:443',
}).$extend(walrus());
```

The Walrus SDK includes all the relevant package and object IDs needed for connecting to Testnet.
You can also manually configure the Walrus SDK to use a different set of IDs, allowing you to
connect to a different network or updated deployment of the Walrus contracts.

```ts
import { SuiGrpcClient } from '@mysten/sui/grpc';
import { walrus } from '@mysten/walrus';

const client = new SuiGrpcClient({
	network: 'testnet',
	baseUrl: 'https://fullnode.testnet.sui.io:443',
}).$extend(
	walrus({
		packageConfig: {
			systemObjectId: '0x98ebc47370603fe81d9e15491b2f1443d619d1dab720d586e429ed233e1255c1',
			stakingPoolId: '0x20266a17b4f1a216727f3eef5772f8d486a9e3b5e319af80a5b75809c035561d',
		},
	}),
);
```

Some environments require you to customize how data is fetched:

```ts
import { SuiGrpcClient } from '@mysten/sui/grpc';
import { walrus } from '@mysten/walrus';

const client = new SuiGrpcClient({
	network: 'testnet',
	baseUrl: 'https://fullnode.testnet.sui.io:443',
}).$extend(
	walrus({
		storageNodeClientOptions: {
			fetch: (url, options) => {
				console.log('fetching', url);
				return fetch(url, options);
			},
			timeout: 60_000,
		},
	}),
);
```

You can use this approach to implement a fetch function with custom timeouts, rate limits, retry
logic, or any other desired behavior.
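
As a sketch, a retrying `fetch` wrapper might look like the following. The retry count, backoff
values, and the 500-status check are illustrative choices, not SDK defaults:

```ts
// Sketch of a retrying fetch you could pass as `storageNodeClientOptions.fetch`.
// The retry count and backoff values are illustrative, not SDK defaults.
async function fetchWithRetry(url: string, options?: RequestInit, retries = 3): Promise<Response> {
	let lastError: unknown;
	for (let attempt = 0; attempt < retries; attempt++) {
		try {
			const response = await fetch(url, options);
			// Retry server errors; return any other response as-is
			if (response.status < 500) {
				return response;
			}
			lastError = new Error(`HTTP ${response.status}`);
		} catch (error) {
			lastError = error;
		}
		// Simple exponential backoff between attempts
		await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** attempt));
	}
	throw lastError;
}
```

You could then pass `fetchWithRetry` as the `fetch` option in `storageNodeClientOptions`.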

## `WalrusFile` API

The `WalrusFile` API provides a higher-level abstraction so that applications do not need to worry
about how data is stored in Walrus. It handles data stored directly in blobs and data stored in
quilts, but might expand to cover other storage patterns in the future.

### Reading files

To read files, use the `getFiles` method. This method accepts both blob IDs and quilt IDs, and
returns a `WalrusFile`.

Read files in batches when possible; this allows the client to load multiple files from the same
quilt more efficiently.

```ts
const [file1, file2] = await client.walrus.getFiles({ ids: [anyBlobId, orQuiltId] });
```

A `WalrusFile` works like a `Response` object from the `fetch` API:

```ts
// get contents as a Uint8Array
const bytes = await file1.bytes();
// Parse the contents as a `utf-8` string
const text = await file1.text();
// Parse the contents as JSON
const json = await file2.json();
```

A `WalrusFile` also exposes an identifier and tags, which are set if the file was stored in a quilt:

```ts
const identifier: string | null = await file1.getIdentifier();
const tags: Record<string, string> = await file1.getTags();
```

#### `WalrusBlob`

You can also get a `WalrusBlob` instead of a `WalrusFile` if you have the `blobId`:

```ts
const blob = await client.walrus.getBlob({ blobId });
```

If the blob is a quilt, you can read the files in the quilt:

```ts
// Get all files:
const files = await blob.files();
// Get files by identifier
const [readme] = await blob.files({ identifiers: ['README.md'] });
// Get files by tag
const textFiles = await blob.files({ tags: [{ 'content-type': 'text/plain' }] });
// Get files by quilt id
const filesById = await blob.files({ ids: [quiltID] });
```

### Writing files

You can also construct a `WalrusFile` from a `Uint8Array`, `Blob`, or a `string`, which you can then
store on Walrus:

```ts
const file1 = WalrusFile.from({
	contents: new Uint8Array([1, 2, 3]),
	identifier: 'file1.bin',
});
const file2 = WalrusFile.from({
	contents: new Blob([new Uint8Array([1, 2, 3])]),
	identifier: 'file2.bin',
});
const file3 = WalrusFile.from({
	contents: new TextEncoder().encode('Hello from the TS SDK!!!\n'),
	identifier: 'README.md',
	tags: {
		'content-type': 'text/plain',
	},
});
```

After you have your files, use the `writeFiles` method to write them to Walrus.

Along with the files, you also need to provide a `Signer` instance that signs and pays for the
transaction and storage fees. The signer's address needs sufficient SUI to cover the transactions
that register the blob and certify its availability after upload. The signer must also own enough
WAL to cover the write fee and the cost of storing the blob for the specified number of epochs.

The exact costs depend on the size of the blobs, as well as the current gas and storage prices.

```ts
const results: {
	id: string;
	blobId: string;
	blobObject: Blob.$inferType;
}[] = await client.walrus.writeFiles({
	files: [file1, file2, file3],
	epochs: 3,
	deletable: true,
	signer: keypair,
});
```

The provided files are all written into a single quilt. Future versions of the SDK might optimize
how files are stored to be more efficient by splitting files into multiple quilts.

The current quilt encoding is less efficient for single files, so writing multiple files together is
recommended when possible. Writing raw blobs directly is also possible using the `writeBlob` API
described below.

#### Writing files in browser environments

When the transactions to upload a blob are signed by a wallet in a browser, some wallets use popups
to prompt you for a signature. If the popups are not opened in direct response to a user
interaction, the browser might block them.

To avoid this, execute the transactions that register and certify the blob in separate event
handlers by creating separate buttons for each step.

The `client.walrus.writeFilesFlow` method returns an object whose methods break the write flow into
several smaller steps:

1. `encode`: Encodes the files and generates a `blobId`.
2. `register`: Returns a transaction that registers the blob onchain.
3. `upload`: Uploads the data to storage nodes.
4. `certify`: Returns a transaction that certifies the blob onchain.
5. `listFiles`: Returns a list of the created files.

The following simplified example shows the core API usage with separate user interactions:

```tsx
// Step 1: Create and encode the flow (can be done immediately when file is selected)
const flow = client.walrus.writeFilesFlow({
	files: [
		WalrusFile.from({
			contents: new Uint8Array(fileData),
			identifier: 'my-file.txt',
		}),
	],
});

await flow.encode();

// Step 2: Register the blob (triggered by user clicking a register button after the encode step)
async function handleRegister() {
	const registerTx = flow.register({
		epochs: 3,
		owner: currentAccount.address,
		deletable: true,
	});
	const result = await signAndExecuteTransaction({ transaction: registerTx });

	// Check transaction status
	if (result.$kind === 'FailedTransaction') {
		throw new Error(`Registration failed: ${result.FailedTransaction.status.error?.message}`);
	}

	// Step 3: Upload the data to storage nodes
	// This can be done immediately after the register step, or as a separate step the user initiates
	await flow.upload({ digest: result.Transaction.digest });
}

// Step 4: Certify the blob (triggered by user clicking a certify button after the blob is uploaded)
async function handleCertify() {
	const certifyTx = flow.certify();

	const result = await signAndExecuteTransaction({ transaction: certifyTx });

	// Check transaction status
	if (result.$kind === 'FailedTransaction') {
		throw new Error(`Certification failed: ${result.FailedTransaction.status.error?.message}`);
	}

	// Step 5: Get the new files
	const files = await flow.listFiles();
	console.log('Uploaded files', files);
}
```

This approach ensures that each transaction signing step is separated into different user
interactions, allowing wallet popups to work properly without being blocked by the browser.

### Running the full flow

If you do not need separate user interactions for each step, use `run()` to execute the full
pipeline as an async iterator.

```ts
const flow = client.walrus.writeBlobFlow({ blob });

for await (const step of flow.run({ signer, epochs: 3, deletable: true })) {
	await db.save(fileId, step); // persist for crash recovery
}
```

The flow also provides `executeRegister` and `executeCertify` methods that handle signing and return
typed step results:

```ts
const flow = client.walrus.writeBlobFlow({ blob });
const enc = await flow.encode();
const reg = await flow.executeRegister({ signer, epochs: 3, deletable: true, owner: address });
const up = await flow.upload({ digest: reg.txDigest });
const cert = await flow.executeCertify({ signer });
```

#### Resuming uploads

Each step executed by `run()` produces a `WriteBlobStep` that you can persist for crash recovery. To
resume an upload after a crash, pass a saved `WriteBlobStep` as `resume`. The flow skips completed
steps, validates the `blobId`, and only uploads slivers that are not already stored:

```ts
const saved = await db.load(fileId);
const flow = client.walrus.writeBlobFlow({ blob, resume: saved });

for await (const step of flow.run({ signer, epochs: 3, deletable: true })) {
	await db.save(fileId, step);
}
```

## Using an upload relay

Writing blobs directly from a client requires many requests to write data to all the storage nodes.
An upload relay offloads the work of these writes to a server, reducing complexity for the client.

To use an upload relay, add the `uploadRelay` option when adding the Walrus extension:

```ts
const client = new SuiGrpcClient({
	network: 'testnet',
	baseUrl: 'https://fullnode.testnet.sui.io:443',
}).$extend(
	walrus({
		uploadRelay: {
			host: 'https://upload-relay.testnet.walrus.space',
			sendTip: {
				max: 1_000,
			},
		},
	}),
);
```

The `host` option is required and indicates the URL for your upload relay. Upload relays might
require a tip to cover the cost of writing the blob. If you configure a maximum tip (paid in MIST),
the `WalrusClient` automatically determines the required tip for your upload relay. You can also
configure the tip manually, as shown below.

Find the tip required by an upload relay using the `tip-config` endpoint (for example,
`https://upload-relay.testnet.walrus.space/v1/tip-config`). The tip is either a `const` or a
`linear` type.

### `const` tip

A `const` tip sends a fixed amount for each blob written to the upload relay.

```ts
const client = new SuiGrpcClient({
	network: 'testnet',
	baseUrl: 'https://fullnode.testnet.sui.io:443',
}).$extend(
	walrus({
		uploadRelay: {
			host: 'https://upload-relay.testnet.walrus.space',
			sendTip: {
				address: '0x123...',
				kind: {
					const: 105,
				},
			},
		},
	}),
);
```

### `linear` tip

A `linear` tip sends a base amount for each blob written to the upload relay, plus an additional
amount proportional to the encoded size of the blob.

```ts
const client = new SuiGrpcClient({
	network: 'testnet',
	baseUrl: 'https://fullnode.testnet.sui.io:443',
}).$extend(
	walrus({
		uploadRelay: {
			host: 'https://upload-relay.testnet.walrus.space',
			sendTip: {
				address: '0x123...',
				kind: {
					linear: {
						base: 105,
						perEncodedKib: 10,
					},
				},
			},
		},
	}),
);
```

## Interacting with blobs directly

If you do not want to use the `WalrusFile` abstractions, use the `readBlob` and `writeBlob` APIs
directly.

### Reading blobs

The `readBlob` method reads a blob given the `blobId` and returns a `Uint8Array` containing the blob
content:

```ts
const blob = await client.walrus.readBlob({ blobId });
```

### Writing blobs

The `writeBlob` method writes a blob (as a `Uint8Array`) to Walrus. You need to specify how long the
blob should be stored for and whether the blob should be deletable.

```ts
const file = new TextEncoder().encode('Hello from the TS SDK!!!\n');

const { blobId } = await client.walrus.writeBlob({
	blob: file,
	deletable: false,
	epochs: 3,
	signer: keypair,
});
```

`writeBlob` and `writeFiles` also support `onStep` and `resume` for crash-recoverable uploads:

```ts
const { blobId } = await client.walrus.writeBlob({
	blob: file,
	deletable: true,
	epochs: 3,
	signer: keypair,
	onStep: (step) => db.save(fileId, step),
	resume: await db.load(fileId), // pass a previously saved step to resume
});
```

## Error handling

The SDK exports classes for the different types of errors it can throw. Walrus is a fault-tolerant
distributed system, and many of these errors are recoverable. During epoch changes, the data cached
in the `WalrusClient` can become invalid; errors that result from this situation extend the
`RetryableWalrusClientError` class.

You can check for these errors and reset the client before retrying:

```ts
import { RetryableWalrusClientError } from '@mysten/walrus';

if (error instanceof RetryableWalrusClientError) {
	client.walrus.reset();

	/* retry your operation */
}
```

`RetryableWalrusClientError` errors are not guaranteed to succeed after resetting the client and
retrying, but this pattern handles some edge cases.

High-level methods like `readBlob` already handle various error cases and automatically retry when
encountering these errors, as well as handling cases where only a subset of nodes need to respond
successfully to read or publish a blob.

When using the lower-level methods to build your own read or publish flows, make sure you
understand how many shards and slivers must be successfully written or read for your operation to
succeed, and gracefully handle cases where some nodes might be in a bad state.

### Network errors

Walrus tolerates some nodes being down, so the SDK only throws errors when it cannot read from or
write to enough storage nodes. This can make troubleshooting challenging, because you do not see
the individual network errors from each failed request.

Pass an `onError` option in the `storageNodeClientOptions` to get the individual errors from failed
requests:

```ts
const client = new SuiGrpcClient({
	network: 'testnet',
	baseUrl: 'https://fullnode.testnet.sui.io:443',
}).$extend(
	walrus({
		storageNodeClientOptions: {
			onError: (error) => console.log(error),
		},
	}),
);
```

## Configuring network requests

Reading and writing blobs directly from storage nodes requires many requests. The Walrus SDK issues
all requests needed to complete these operations, but does not handle all the complexities a robust
aggregator or publisher might encounter.

By default, all requests use the global `fetch` for whatever runtime the SDK runs in.

This does not impose any limitations on concurrency, and is subject to default timeouts and behavior
defined by your runtime. To customize how requests are made, provide a custom `fetch` method:

```ts
import type { RequestInfo, RequestInit } from 'undici';
import { Agent, fetch } from 'undici';

const client = new SuiGrpcClient({
	network: 'testnet',
	baseUrl: 'https://fullnode.testnet.sui.io:443',
}).$extend(
	walrus({
		storageNodeClientOptions: {
			timeout: 60_000,
			fetch: (url, init) => {
				// Some casting may be required because undici types may not exactly match the @types/node types
				return fetch(url as RequestInfo, {
					...(init as RequestInit),
					dispatcher: new Agent({
						connectTimeout: 60_000,
					}),
				}) as unknown as Promise<Response>;
			},
		},
	}),
);
```

## Loading the WASM module in Vite or client-side apps

The Walrus SDK requires WASM bindings to encode and decode blobs. When running in Node.js or Bun,
and with some bundlers, this works without any additional configuration.

In some cases, you might need to manually specify where the SDK loads the WASM bindings from.

In Vite, get the URL for the WASM bindings by importing the WASM file with a `?url` suffix, then
pass it into the Walrus client:

```ts
import walrusWasmUrl from '@mysten/walrus-wasm/web/walrus_wasm_bg.wasm?url';

const client = new SuiGrpcClient({
	network: 'testnet',
	baseUrl: 'https://fullnode.testnet.sui.io:443',
}).$extend(
	walrus({
		wasmUrl: walrusWasmUrl,
	}),
);
```

If you are unable to get a URL for the WASM file in your bundler or build system, you can self-host
the WASM bindings or load them from a CDN:

```ts
const client = new SuiGrpcClient({
	network: 'testnet',
	baseUrl: 'https://fullnode.testnet.sui.io:443',
}).$extend(
	walrus({
		wasmUrl: 'https://unpkg.com/@mysten/walrus-wasm@latest/web/walrus_wasm_bg.wasm',
	}),
);
```

In Next.js, when using Walrus in API routes, you might need to tell Next.js to skip bundling for the
Walrus packages:

```ts
// next.config.ts
const nextConfig: NextConfig = {
	serverExternalPackages: ['@mysten/walrus', '@mysten/walrus-wasm'],
};
```

## Known fetch limitations

* Some nodes can be slow to respond. When running in Node.js, the default `connectTimeout` is 10
  seconds and can cause request timeouts.
* In Bun, the `abort` signal stops requests from responding, but the requests still wait for
  completion before their promises reject.

## Full API

For a complete overview of the available methods on the `WalrusClient`, reference the
[TypeDocs](/typedoc/classes/_mysten_walrus.WalrusClient.html).

## Examples

There are a number of
[examples you can reference](https://github.com/MystenLabs/ts-sdks/tree/main/packages/walrus/examples)
in the `ts-sdks` repository, including aggregators and publishers built with the Walrus SDK.
