The go-ipfs config file is a JSON document located at $IPFS_PATH/config. It is read once at node instantiation, either for an offline command or when starting the daemon. Commands that execute on a running daemon do not read the config file at runtime.
Configuration profiles allow you to tweak the configuration quickly. Profiles can be applied with the --profile flag to ipfs init or with the ipfs config profile apply command. When a profile is applied, a backup of the configuration file is created in $IPFS_PATH.
The available configuration profiles are listed below. You can also find them documented in ipfs config profile --help.
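For example, the two application paths described above might look like this on the command line (profile names are taken from the list below):

```shell
# Apply the 'server' profile while initializing a new repo
ipfs init --profile=server

# Apply the 'lowpower' profile to an existing repo;
# a backup of the previous config is written to $IPFS_PATH
ipfs config profile apply lowpower
```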
- server: Disables local host discovery. Recommended when running IPFS on machines with public IPv4 addresses.
- randomports: Use a random port number for the swarm.
- default-datastore: Configures the node to use the default datastore (flatfs). Read the "flatfs" profile description for more information on this datastore. This profile may only be applied when first initializing the node.
- local-discovery: Sets default values for the fields affected by the server profile; enables discovery in local networks.
- test: Reduces external interference with the IPFS daemon; useful when using the daemon in test environments.
- default-networking: Restores default network settings. Inverse of the test profile.
- flatfs: Configures the node to use the flatfs datastore. This is the most battle-tested and reliable datastore, but it is significantly slower than the badger datastore. You should use this datastore if:
  - You need a very simple, very reliable datastore and you trust your filesystem. This datastore stores each block as a separate file in the underlying filesystem, so it is unlikely to lose data unless there is an issue with the underlying file system.
  - You need to run garbage collection on a small (<= 10GiB) datastore. The default datastore, badger, can leave several gigabytes of data behind when garbage collecting.
  - You're concerned about memory usage. In its default configuration, badger can use up to several gigabytes of memory.
  This profile may only be applied when first initializing the node.
- badgerds: Configures the node to use the badger datastore. This is the fastest datastore. Use this datastore if performance, especially when adding many gigabytes of files, is critical. However:
  - This datastore will not properly reclaim space when your datastore is smaller than several gigabytes. If you run IPFS with --enable-gc (block-level garbage collection enabled), you plan on storing very little data in your IPFS node, and disk usage is more critical than performance, consider using flatfs.
  - This datastore uses up to several gigabytes of memory.
  This profile may only be applied when first initializing the node.
- lowpower: Reduces daemon overhead on the system. May affect node functionality; performance of content discovery and data fetching may be degraded.
The documented configuration sections are:
- Addresses
- API
- AutoNAT
- Bootstrap
- Datastore
- Discovery
- Gateway
- Identity
- Ipns
- Mounts
- Pubsub
- Reprovider
- Routing
- Swarm
Contains information about various listener addresses to be used by this node.
Multiaddr or array of multiaddrs describing the address to serve the local HTTP API on.
Supported Transports:
- tcp/ip{4,6} - /ipN/.../tcp/...
- unix - /unix/path/to/socket
Default: /ip4/127.0.0.1/tcp/5001
Multiaddr or array of multiaddrs describing the address to serve the local gateway on.
Supported Transports:
- tcp/ip{4,6} - /ipN/.../tcp/...
- unix - /unix/path/to/socket
Default: /ip4/127.0.0.1/tcp/8080
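As a sketch, both listener addresses can be changed with ipfs config (ports 5002 and 8081 below are arbitrary examples, not recommendations; changes take effect the next time the daemon starts):

```shell
# Move the local API and gateway listeners to different ports
ipfs config Addresses.API /ip4/127.0.0.1/tcp/5002
ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8081
```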
Array of multiaddrs describing which addresses to listen on for p2p swarm connections.
Supported Transports:
- tcp/ip{4,6} - /ipN/.../tcp/...
- websocket - /ipN/.../tcp/.../ws
- quic - /ipN/.../udp/.../quic
Default:
[
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001",
"/ip4/0.0.0.0/udp/4001/quic",
"/ip6/::/udp/4001/quic"
]
If non-empty, this array specifies the swarm addresses to announce to the network. If empty, the daemon will announce inferred swarm addresses.
Default: []
Array of swarm addresses not to announce to the network.
Default: []
Contains information used by the API gateway.
Map of HTTP headers to set on responses from the API HTTP server.
Example:
{
"Foo": ["bar"]
}
Default: null
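A common use of this map is setting CORS headers so a browser app can call the API; the origin below is a hypothetical example, and this is a sketch rather than a recommended policy:

```shell
# Allow a hypothetical local web app on port 3000 to call the API
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["http://localhost:3000"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["GET", "POST", "PUT"]'
```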
Contains the configuration options for the AutoNAT service. The AutoNAT service helps other nodes on the network determine if they're publicly reachable from the rest of the internet.
When unset (default), the AutoNAT service defaults to enabled. Otherwise, this field can take one of two values:
- "enabled" - Enable the service (unless the node determines that it, itself, isn't reachable by the public internet).
- "disabled" - Disable the service.
Additional modes may be added in the future.
When set, this option configures the AutoNAT service's throttling behavior. By default, go-ipfs will rate-limit the number of NAT checks performed for other nodes to 30 per minute, and 3 per peer.
Configures how many AutoNAT requests to service per AutoNAT.Throttle.Interval.
Default: 30
Configures how many AutoNAT requests per-peer to service per AutoNAT.Throttle.Interval.
Default: 3
Configures the interval for the above limits.
Default: 1 Minute
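Putting the throttle settings together, a config fragment matching the defaults described above might look like the sketch below. The field names GlobalLimit, PeerLimit, and Interval are assumptions inferred from the descriptions, not taken from this document:

```json
{
  "AutoNAT": {
    "ServiceMode": "enabled",
    "Throttle": {
      "GlobalLimit": 30,
      "PeerLimit": 3,
      "Interval": "1m"
    }
  }
}
```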
Bootstrap is an array of multiaddrs of trusted nodes to connect to in order to initiate a connection to the network.
Default: The ipfs.io bootstrap nodes
Contains information related to the construction and operation of the on-disk storage system.
A soft upper limit for the size of the ipfs repository's datastore. With StorageGCWatermark, it is used to calculate whether to trigger a GC run (only if the --enable-gc flag is set).
Default: 10GB
The percentage of the StorageMax value at which a garbage collection will be triggered automatically if the daemon was run with automatic GC enabled (that option currently defaults to false).
Default: 90
A time duration specifying how frequently to run a garbage collection. Only used if automatic gc is enabled.
Default: 1h
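Tying the GC options together, a sketch (the values are illustrative, not recommendations):

```shell
# Raise the datastore soft limit, shorten the GC cadence,
# then run the daemon with automatic GC enabled
ipfs config Datastore.StorageMax 20GB
ipfs config Datastore.GCPeriod 2h
ipfs daemon --enable-gc
```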
A boolean value. If set to true, all block reads from disk will be hashed and verified. This will cause increased CPU utilization.
Default: false
A number representing the size in bytes of the blockstore's bloom filter. A value of zero represents the feature being disabled.
This site generates useful graphs for various bloom filter values: https://hur.st/bloomfilter/?n=1e6&p=0.01&m=&k=7 You may use it to find a preferred optimal value, where m is BloomFilterSize in bits. Remember to convert the value m from bits into bytes for use as BloomFilterSize in the config file. For example, for 1,000,000 blocks, expecting a 1% false-positive rate, you'd end up with a filter size of 9592955 bits, so for BloomFilterSize we'd want to use 1199120 bytes. As of this writing, 7 hash functions are used, so the constant k is 7 in the formula.
Default: 0
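The bits-to-bytes conversion above can be checked with a one-liner (9592955 bits is the example value for 1,000,000 blocks at a 1% false-positive rate):

```shell
# Convert the example filter size from bits to bytes,
# rounding up to the next whole byte
bits=9592955
echo $(( (bits + 7) / 8 ))
```

The result, 1199120, is the value you would then set as Datastore.BloomFilterSize.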
Spec defines the structure of the ipfs datastore. It is a composable structure, where each datastore is represented by a json object. Datastores can wrap other datastores to provide extra functionality (eg metrics, logging, or caching).
This can be changed manually, however, if you make any changes that require a different on-disk structure, you will need to run the ipfs-ds-convert tool to migrate data into the new structures.
For more information on possible values for this configuration option, see docs/datastores.md
Default:
{
"mounts": [
{
"child": {
"path": "blocks",
"shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
"sync": true,
"type": "flatfs"
},
"mountpoint": "/blocks",
"prefix": "flatfs.datastore",
"type": "measure"
},
{
"child": {
"compression": "none",
"path": "datastore",
"type": "levelds"
},
"mountpoint": "/",
"prefix": "leveldb.datastore",
"type": "measure"
}
],
"type": "mount"
}
Contains options for configuring ipfs node discovery mechanisms.
Options for multicast dns peer discovery.
A boolean value for whether or not mdns should be active.
Default: true
A number of seconds to wait between discovery checks.
Options for the HTTP gateway.
When set to true, the gateway will only serve content already in the local repo and will not fetch files from the network.
Default: false
A boolean to configure whether DNSLink lookup for the value in the Host HTTP header should be performed. If DNSLink is present, the content path stored in the DNS TXT record becomes the / and the respective payload is returned to the client.
Default: false
Headers to set on gateway responses.
Default:
{
"Access-Control-Allow-Headers": [
"X-Requested-With"
],
"Access-Control-Allow-Methods": [
"GET"
],
"Access-Control-Allow-Origin": [
"*"
]
}
A URL to redirect requests for / to.
Default: ""
A boolean to configure whether the gateway is writeable or not.
Default: false
Array of acceptable url paths that a client can specify in X-Ipfs-Path-Prefix header.
The X-Ipfs-Path-Prefix header is used to specify a base path to prepend to links in directory listings and for trailing-slash redirects. It is intended to be set by a frontend http proxy like nginx.
Example: We mount blog.ipfs.io (a DNSLink page) at ipfs.io/blog.

.ipfs/config:
"Gateway": {
  "PathPrefixes": ["/blog"]
}

nginx_ipfs.conf:
location /blog/ {
  rewrite "^/blog(/.*)$" $1 break;
  proxy_set_header Host blog.ipfs.io;
  proxy_set_header X-Ipfs-Gateway-Prefix /blog;
  proxy_pass http://127.0.0.1:8080;
}
Default: []
PublicGateways is a dictionary for defining gateway behavior on specified hostnames.
Array of paths that should be exposed on the hostname.
Example:
{
  "Gateway": {
    "PublicGateways": {
      "example.com": {
        "Paths": ["/ipfs", "/ipns"]
      }
    }
  }
}
Above enables http://example.com/ipfs/*
and http://example.com/ipns/*
but not http://example.com/api/*
Default: []
A boolean to configure whether the gateway at the hostname provides Origin isolation between content roots.
- true - enables subdomain gateway at http://*.{hostname}/
  - Requires whitelist: make sure the respective Paths are set. For example, Paths: ["/ipfs", "/ipns"] are required for http://{cid}.ipfs.{hostname} and http://{foo}.ipns.{hostname} to work:
    "Gateway": {
      "PublicGateways": {
        "dweb.link": {
          "UseSubdomains": true,
          "Paths": ["/ipfs", "/ipns"]
        }
      }
    }
  - Backward-compatible: requests for content paths such as http://{hostname}/ipfs/{cid} produce a redirect to http://{cid}.ipfs.{hostname}
  - API: if /api is on the Paths whitelist, http://{hostname}/api/{cmd} produces a redirect to http://api.{hostname}/api/{cmd}
- false - enables path gateway at http://{hostname}/*
  - Example:
    "Gateway": {
      "PublicGateways": {
        "ipfs.io": {
          "UseSubdomains": false,
          "Paths": ["/ipfs", "/ipns", "/api"]
        }
      }
    }
Default: false
A boolean to configure whether DNSLink for the hostname present in the Host HTTP header should be resolved. Overrides the global setting. If Paths are defined, they take priority over DNSLink.
Default: false (DNSLink lookup enabled by default for every defined hostname)
Default entries for the localhost hostname and loopback IPs are always present. If additional config is provided for those hostnames, it will be merged on top of the implicit values:
{
"Gateway": {
"PublicGateways": {
"localhost": {
"Paths": ["/ipfs", "/ipns"],
"UseSubdomains": true
}
}
}
}
It is also possible to remove a default by setting it to null. For example, to disable the subdomain gateway on localhost and make that hostname act the same as 127.0.0.1:
$ ipfs config --json Gateway.PublicGateways '{"localhost": null }'
Below is a list of the most common public gateway setups.
- Public subdomain gateway at http://{cid}.ipfs.dweb.link (each content root gets its own Origin):
  $ ipfs config --json Gateway.PublicGateways '{"dweb.link": {"UseSubdomains": true, "Paths": ["/ipfs", "/ipns"]}}'
  Note I: this enables automatic redirects from content paths to subdomains: http://dweb.link/ipfs/{cid} → http://{cid}.ipfs.dweb.link
  Note II: if you run go-ipfs behind a reverse proxy that provides TLS, make sure it adds an X-Forwarded-Proto: https HTTP header to ensure users are redirected to https://, not http://. The NGINX directive is proxy_set_header X-Forwarded-Proto "https";: https://dweb.link/ipfs/{cid} → https://{cid}.ipfs.dweb.link
- Public path gateway at http://ipfs.io/ipfs/{cid} (no Origin separation):
  $ ipfs config --json Gateway.PublicGateways '{"ipfs.io": {"UseSubdomains": false, "Paths": ["/ipfs", "/ipns", "/api"]}}'
- Public DNSLink gateway resolving every hostname passed in the Host header:
  $ ipfs config --json Gateway.NoDNSLink false
  Note that NoDNSLink: false is the default (it works out of the box unless set to true manually).
- Hardened, site-specific DNSLink gateway. Disable fetching of remote data (NoFetch: true) and resolving DNSLink at unknown hostnames (NoDNSLink: true). Then, enable the DNSLink gateway only for the specific hostname (for which data is already present on the node), without exposing any content-addressing Paths:
  $ ipfs config --json Gateway.NoFetch true
  $ ipfs config --json Gateway.NoDNSLink true
  $ ipfs config --json Gateway.PublicGateways '{"en.wikipedia-on-ipfs.org": {"NoDNSLink": false, "Paths": []}}'
The unique PKI identity label for this config's peer. Set on init and never read; it's merely here for convenience. IPFS will always generate the peerID from its keypair at runtime.
The base64-encoded protobuf describing (and containing) the node's private key.
A time duration specifying how frequently to republish ipns records to ensure they stay fresh on the network. If unset, we default to 4 hours.
A time duration specifying the value to set on ipns records for their validity lifetime.
If unset, we default to 24 hours.
The number of entries to store in an LRU cache of resolved ipns entries. Entries will be kept cached until their lifetime is expired.
Default: 128
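As a sketch, an Ipns section spelling out the defaults described above might look like the fragment below. The field names RepublishPeriod, RecordLifetime, and ResolveCacheSize are assumptions inferred from the descriptions, not taken from this document:

```json
{
  "Ipns": {
    "RepublishPeriod": "4h",
    "RecordLifetime": "24h",
    "ResolveCacheSize": 128
  }
}
```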
FUSE mount point configuration options.
Mountpoint for /ipfs/.
Mountpoint for /ipns/.
Sets the FUSE allow other option on the mountpoint.
Pubsub configures the ipfs pubsub subsystem. To use it, it must be enabled by passing the --enable-pubsub-experiment flag to the daemon.
Sets the default router used by pubsub to route messages to peers. This can be one of:
- "floodsub" - floodsub is a basic router that simply floods messages to all connected peers. This router is extremely inefficient but very reliable.
- "gossipsub" - gossipsub is a more advanced routing algorithm that will build an overlay mesh from a subset of the links in the network.
Default: "gossipsub"
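As a sketch, switching routers might look like this (the Pubsub.Router key name is assumed from the description above):

```shell
# Select the flood router instead of the default,
# then restart the daemon with pubsub enabled
ipfs config Pubsub.Router floodsub
ipfs daemon --enable-pubsub-experiment
```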
Disables message signing and signature verification. Enable this option if you're operating in a completely trusted network.
It is not safe to disable signing even if you don't care who sent the message because spoofed messages can be used to silence real messages by intentionally re-using the real message's message ID.
Default: false
Configures the peering subsystem. The peering subsystem configures go-ipfs to connect to, remain connected to, and reconnect to a set of nodes. Nodes should use this subsystem to create "sticky" links between frequently useful peers to improve reliability.
Use-cases:
- An IPFS gateway connected to an IPFS cluster should peer to ensure that the gateway can always fetch content from the cluster.
- A dapp may peer embedded go-ipfs nodes with a set of pinning services or textile cafes/hubs.
- A set of friends may peer to ensure that they can always fetch each other's content.
When a node is added to the set of peered nodes, go-ipfs will:
- Protect connections to this node from the connection manager. That is, go-ipfs will never automatically close the connection to this node and connections to this node will not count towards the connection limit.
- Connect to this node on startup.
- Repeatedly try to reconnect to this node if the last connection dies or the node goes offline. This repeated re-connect logic is governed by a randomized exponential backoff delay ranging from ~5 seconds to ~10 minutes to avoid repeatedly reconnecting to a node that's offline.
Peering can be asymmetric or symmetric:
- When symmetric, the connection will be protected by both nodes and will likely be very stable.
- When asymmetric, only one node (the node that configured peering) will protect the connection and attempt to re-connect to the peered node on disconnect. If the peered node is under heavy load and/or has a low connection limit, the connection may flap repeatedly. Be careful when asymmetrically peering to not overload peers.
The set of peers with which to peer. Each entry is of the form:
{
"ID": "QmSomePeerID", # The peer's ID.
"Addrs": ["/ip4/1.2.3.4/tcp/1234"] # Known addresses for the peer. If none are specified, the DHT will be queried.
}
Additional fields may be added in the future.
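Because the snippet above uses #-comments, it is not valid JSON as written; a literal config fragment (with a placeholder peer ID and address) might look like:

```json
{
  "Peering": {
    "Peers": [
      {
        "ID": "QmSomePeerID",
        "Addrs": ["/ip4/1.2.3.4/tcp/1234"]
      }
    ]
  }
}
```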
Sets the time between rounds of reproviding local content to the routing system. If unset, it defaults to 12 hours. If set to the value "0", it will disable content reproviding.
Note: disabling content reproviding will result in other nodes on the network not being able to discover that you have the objects that you have. If you want to have this disabled and keep the network aware of what you have, you must manually announce your content periodically.
Tells reprovider what should be announced. Valid strategies are:
- "all" (default) - announce all stored data
- "pinned" - only announce pinned data
- "roots" - only announce directly pinned keys and root keys of recursive pins
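Sketch (the key names Reprovider.Strategy and Reprovider.Interval are assumed from the descriptions above):

```shell
# Announce only pinned data, every 12 hours
ipfs config Reprovider.Strategy pinned
ipfs config Reprovider.Interval 12h
```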
Contains options for content, peer, and IPNS routing mechanisms.
Content routing mode. Can be overridden with the daemon --routing flag.
There are two core routing options: "none" and "dht" (default).
- If set to "none", your node will use no routing system. You'll have to explicitly connect to peers that have the content you're looking for.
- If set to "dht" (or "dhtclient"/"dhtserver"), your node will use the IPFS DHT.
When the DHT is enabled, it can operate in two modes: client and server.
- In server mode, your node will query other peers for DHT records, and will respond to requests from other peers (both requests to store records and requests to retrieve records).
- In client mode, your node will query the DHT as a client but will not respond to requests from other peers. This mode is less resource intensive than server mode.
When Routing.Type is set to dht, your node will start as a DHT client, and switch to a DHT server when and if it determines that it's reachable from the public internet (e.g., it's not behind a firewall).
To force a specific DHT mode, client or server, set Routing.Type to dhtclient or dhtserver respectively. Please do not set this to dhtserver unless you're sure your node is reachable from the public network.
Example:
{
"Routing": {
"Type": "dhtclient"
}
}
Options for configuring the swarm.
An array of addresses (multiaddr netmasks) to not dial. By default, IPFS nodes advertise all addresses, even internal ones. This makes it easier for nodes on the same network to reach each other. Unfortunately, this means that an IPFS node will try to connect to one or more private IP addresses whenever dialing another node, even if this other node is on a different network. This may trigger netscan alerts on some hosting providers or cause strain in some setups.
The server configuration profile fills up this list with sensible defaults, preventing dials to all non-routable IP addresses (e.g., 192.168.0.0/16), but you should always check settings against your own network and/or hosting provider.
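As a sketch, this field can also be set directly. The Swarm.AddrFilters key name is assumed from the description, and the private ranges below are illustrative (the /ipcidr/ component expresses a netmask in multiaddr form):

```shell
# Refuse to dial two common private ranges; adjust for your network
ipfs config --json Swarm.AddrFilters '["/ip4/192.168.0.0/ipcidr/16", "/ip4/10.0.0.0/ipcidr/8"]'
```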
A boolean value that when set to true, will cause ipfs to not keep track of bandwidth metrics. Disabling bandwidth metrics can lead to a slight performance improvement, as well as a reduction in memory usage.
Disable automatic NAT port forwarding.
When not disabled (default), go-ipfs asks NAT devices (e.g., routers), to open up an external port and forward it to the port go-ipfs is running on. When this works (i.e., when your router supports NAT port forwarding), it makes the local go-ipfs node accessible from the public internet.
Disables the p2p-circuit relay transport. This will prevent this node from connecting to nodes behind relays, or accepting connections from nodes behind relays.
Configures this node to act as a relay "hop". A relay "hop" relays traffic for other peers.
WARNING: Do not enable this option unless you know what you're doing. Other peers will randomly decide to use your node as a relay and consume all available bandwidth. There is no rate-limiting.
Enables "automatic relay" mode for this node. This option does two very different things based on Swarm.EnableRelayHop. See #7228 for context.
If Swarm.EnableAutoRelay is enabled and Swarm.EnableRelayHop is disabled, your node will automatically use public relays from the network if it detects that it cannot be reached from the public internet (e.g., it's behind a firewall). This is likely the feature you're looking for.
If you enable EnableAutoRelay, you should almost certainly disable EnableRelayHop.
If EnableAutoRelay is enabled and EnableRelayHop is enabled, your node will act as a public relay for the network. Furthermore, in addition to simply relaying traffic, your node will advertise itself as a public relay. Unless you have the bandwidth of a small ISP, do not enable both of these options at the same time.
REMOVED: Please use AutoNAT.ServiceMode.
The connection manager determines which and how many connections to keep.
Sets the type of connection manager to use. Options are: "none" (no connection management) and "basic".
LowWater is the minimum number of connections to maintain.
HighWater is the number of connections that, when exceeded, will trigger a connection GC operation.
GracePeriod is a time duration that new connections are immune from being closed by the connection manager.
The "basic" connection manager tries to keep between LowWater and HighWater connections. It works by:
- Keeping all connections until HighWater connections is reached.
- Once HighWater is reached, closing connections until LowWater is reached.
- To prevent thrashing, never closing connections established within the GracePeriod.
Example:
{
"Swarm": {
"ConnMgr": {
"Type": "basic",
"LowWater": 100,
"HighWater": 200,
"GracePeriod": "30s"
}
}
}