diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml new file mode 100644 index 000000000..3ec336a84 --- /dev/null +++ b/.pre-commit-config.yaml @@ -0,0 +1,9 @@ +repos: +- repo: https://github.com/igorshubovych/markdownlint-cli + rev: v0.32.2 + hooks: + - id: markdownlint +- repo: https://github.com/tcort/markdown-link-check + rev: v3.10.3 + hooks: + - id: markdown-link-check \ No newline at end of file diff --git a/README.md b/README.md index 1d874ab78..6805686ec 100644 --- a/README.md +++ b/README.md @@ -26,7 +26,7 @@ - [Static-site generator](#static-site-generator) - [Automated deployments](#automated-deployments) - [Translation](#translation) -- [Core members](#core-members) +- [Core members](#primary-maintainers) - [License](#license) @@ -149,7 +149,7 @@ Please stay tuned on the steps to translate the documentation. - [@johnnymatthews](https://github.com/johnnymatthews): Project leadership & organization - [@cwaring](https://github.com/cwaring): Development support - [@2color](https://github.com/2color): Developer relations & technical writing(ecosystem) -- [@DannyS03](https://github.com/DannyS03}: Technical writing(engineering) +- [@DannyS03](https://github.com/DannyS03): Technical writing(engineering) - [@jennijuju](https://github.com/jennijuju): Management and supervision ## License diff --git a/docs/basics/README.md b/docs/basics/README.md index f12ca6eff..8aaa9b71e 100644 --- a/docs/basics/README.md +++ b/docs/basics/README.md @@ -17,7 +17,7 @@ Have an idea of what IPFS is but haven't really used it before? You should start ![An IPFS daemon running in a terminal window.](./images/ipfs-command-line.png) -If you're a bit more serious about IPFS and want to start poking around the command-line interface (CLI), then this section is for you. 
No buttons or images here; [just good-old-fashioned CLI interfaces and pipeable commands →](./command-line.md) +If you're a bit more serious about IPFS and want to start poking around the command-line interface (CLI), then this section is for you. No buttons or images here; [just good-old-fashioned CLI interfaces and pipeable commands →](../install/command-line.md) ## Other implementations diff --git a/docs/basics/desktop-app.md b/docs/basics/desktop-app.md index 38c63123e..466224e19 100644 --- a/docs/basics/desktop-app.md +++ b/docs/basics/desktop-app.md @@ -8,12 +8,12 @@ description: "A simple walkthrough of the basic functions of the IPFS desktop ap This guide will walk you through the basics of IPFS Desktop and teach you how to add, remove, and download a file using IPFS. This guide will only cover the basics and will avoid talking about more complex concepts. :::tip Use the glossary -Some of these terms might be unfamiliar to you, and that's ok! Just check out the [glossary](../concepts/glossary/)! There you'll find definitions of all the common terms used when talking about IPFS. +Some of these terms might be unfamiliar to you, and that's ok! Just check out the [glossary](../concepts/glossary.md)! There you'll find definitions of all the common terms used when talking about IPFS. ::: ## Install IPFS Desktop -Installation instructions for [macOS](../install/ipfs-desktop/#macos), [Ubuntu](../install/ipfs-desktop/#ubuntu), and [Windows](../install/ipfs-desktop/#windows). +Installation instructions for [macOS](../install/ipfs-desktop.md#macos), [Ubuntu](../install/ipfs-desktop.md#ubuntu), and [Windows](../install/ipfs-desktop.md#windows). The installation guides linked above are straightforward and easy to follow; simply follow the instructions that correspond to your operating system, and you will have IPFS Desktop going in just a few minutes. 
diff --git a/docs/basics/go/command-line.md b/docs/basics/go/command-line.md index 925da4b68..876ae1639 100644 --- a/docs/basics/go/command-line.md +++ b/docs/basics/go/command-line.md @@ -7,13 +7,13 @@ description: "A simple walkthrough of how to perform basic IPFS operations using This short guide aims to walk you through the **basics of using IPFS with the Kubo CLI**. Kubo is [one of multiple IPFS implementations](../ipfs-implementations.md). It is the oldest IPFS implementation and exposes a CLI (among other things). -You will learn how to add, retrieve, read, and remove files within the CLI. If you are unsure about the meaning of some terms, you can check out the [glossary](../concepts/glossary.md). +You will learn how to add, retrieve, read, and remove files within the CLI. If you are unsure about the meaning of some terms, you can check out the [glossary](../../concepts/glossary.md). All instructions and examples shown here were performed and tested on an M1 Mac. However, the IPFS commands are the same on Linux, macOS, and Windows. You will need to know how to navigate your computer's directories from within the CLI. If you're unsure how to use the CLI, we recommend learning how before continuing with this guide. ## Install Kubo -Next up, we need to install Kubo for the command-line. We have a great guide that will walk you through how to [install Kubo with the CLI](../install/command-line.md). +Next up, we need to install Kubo for the command-line. We have a great guide that will walk you through how to [install Kubo with the CLI](../../install/command-line.md). Once you have Kubo installed, we need to get our node up and running. 
If this is your first time using Kubo, you will first need to initialize the configuration files: @@ -191,7 +191,7 @@ We can _pin_ data we want to save to our IPFS node to ensure we don't lose this pinned bafybeif2ewg3nqa33mjokpxii36jj2ywfqjpy3urdh7v6vqyfjoocvgy3a recursively ``` -By default, objects that you retrieve over IPFS are not pinned to your node. If you wish to prevent the files from being garbage collected, you need to pin them. You will notice that the pin you just added is a `recursive` pin, meaning it is a directory containing other objects. Check out the [Pinning page to learn more about how this works](../concepts/persistence.md). +By default, objects that you retrieve over IPFS are not pinned to your node. If you wish to prevent the files from being garbage collected, you need to pin them. You will notice that the pin you just added is a `recursive` pin, meaning it is a directory containing other objects. Check out the [Pinning page to learn more about how this works](../../concepts/persistence.md). ## Remove a file diff --git a/docs/concepts/bitswap.md b/docs/concepts/bitswap.md index 58b526c8b..f51dad28b 100644 --- a/docs/concepts/bitswap.md +++ b/docs/concepts/bitswap.md @@ -10,7 +10,7 @@ related: # Bitswap -Bitswap is a core module of IPFS for exchanging blocks of data. It directs the requesting and sending of blocks to and from other peers in the network. Bitswap is a _message-based protocol_ where all messages contain [want-lists](#want-list) or blocks. Bitswap has a [Go implementation](https://github.com/ipfs/go-bitswap) and a [JavaScript implementation](https://github.com/ipfs/js-ipfs-bitswap). +Bitswap is a core module of IPFS for exchanging blocks of data. It directs the requesting and sending of blocks to and from other peers in the network. Bitswap is a _message-based protocol_ where all messages contain [want-lists](#want-lists) or blocks. 
Bitswap has a [Go implementation](https://github.com/ipfs/go-bitswap) and a [JavaScript implementation](https://github.com/ipfs/js-ipfs-bitswap). Bitswap has two main jobs: @@ -19,7 +19,11 @@ Bitswap has two main jobs: ## How Bitswap works -IPFS breaks up files into chunks of data called _blocks_. These blocks are identified by a [content identifier (CID)](/concepts/content-addressing). When nodes running the Bitswap protocol want to fetch a file, they send out `want-lists` to other peers. A `want-list` is a list of CIDs for blocks a peer wants to receive. Each node remembers which blocks its peers want. Each time a node receives a block, it checks if any of its peers want the block and sends it to them if they do. +IPFS breaks up files into chunks of data called _blocks_. These blocks are identified by a [content identifier (CID)](/concepts/content-addressing.md). + +### Want Lists + +When nodes running the Bitswap protocol want to fetch a file, they send out `want-lists` to other peers. A `want-list` is a list of CIDs for blocks a peer wants to receive. Each node remembers which blocks its peers want. Each time a node receives a block, it checks if any of its peers want the block and sends it to them if they do. Here is a simplifed version of a `want-list`: @@ -31,13 +35,13 @@ Want-list { } ``` -#### Discovery +### Discovery To find peers that have a file, a node running the Bitswap protocol first sends a request called a _want-have_ to all the peers it is connected to. This _want-have_ request contains the CID of the root block of the file (the root block is at the top of the DAG of blocks that make up the file). Peers that have the root block send a _have_ response and are added to a session. Peers that don't have the block send a _dont-have_ response. Bitswap builds up a map of which nodes have and don't have each block. 
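The want-have bookkeeping described above can be sketched as a toy model. This is illustrative Python only, not the API of the Go or JS implementations linked above; `Peer`, `want_have`, and `discover` are invented names for the example:

```python
# Toy model of Bitswap discovery bookkeeping (illustrative only).
class Peer:
    def __init__(self, name, blocks):
        self.name = name
        self.blocks = set(blocks)  # CIDs this peer can serve

    def want_have(self, cid):
        # Answer a want-have request with "have" or "dont-have".
        return "have" if cid in self.blocks else "dont-have"


def discover(root_cid, connected_peers):
    """Send want-have for the root block to every connected peer and
    build a session of peers that answered "have"."""
    session, block_map = [], {}
    for peer in connected_peers:
        answer = peer.want_have(root_cid)
        block_map[peer.name] = answer  # map of who has / doesn't have the block
        if answer == "have":
            session.append(peer)
    return session, block_map


peers = [Peer("A", {"bafy-root", "bafy-1"}), Peer("B", {"bafy-2"})]
session, block_map = discover("bafy-root", peers)
print([p.name for p in session])  # -> ['A']
print(block_map)                  # -> {'A': 'have', 'B': 'dont-have'}
```

The real protocol then sends _want-block_ only to the session members, as described in the Transfer section.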
![Diagram of the _want-have/want-block_ process.](./images/bitswap/diagram-of-the-want-have-want-block-process.png =740x537) -#### Transfer +### Transfer Bitswap sends _want-block_ to peers that have the block, and they respond with the block itself. If none of the peers have the root block, Bitswap queries the Distributed Hash Table (DHT) to ask who can provide the root block. diff --git a/docs/concepts/case-study-arbol.md b/docs/concepts/case-study-arbol.md index be732d561..f587a8ec9 100644 --- a/docs/concepts/case-study-arbol.md +++ b/docs/concepts/case-study-arbol.md @@ -80,13 +80,13 @@ Arbol's end users enjoy the "it just works" benefits of parametric protection, b 4. **Compression:** This step is the final one before data is imported to IPFS. Arbol compresses each file to save on disk space and reduce sync time. -5. **Hashing:** Arbol uses the stock IPFS recursive add operation ([`ipfs add -r`](./reference/kubo/cli/#ipfs-add)) for hashing, as well as the experimental `no-copy` feature. This feature cuts down on disk space used by the hashing node, especially on the initial build of the dataset. Without it, an entire dataset would be copied into the local IPFS datastore directory. This can create problems, since the default flat file system datastore (`flatfs`) can start to run out of index nodes (the software representation of disk locations) after a few million files, leading to hashing failure. Arbol is also experimenting with [Badger](https://github.com/ipfs/kubo/releases/tag/v0.5.0), an alternative to flat file storage, in collaboration with the IPFS core team as the core team considers incorporating this change into IPFS itself. +5. **Hashing:** Arbol uses the stock IPFS recursive add operation ([`ipfs add -r`](../reference/kubo/cli.md#ipfs-add)) for hashing, as well as the experimental `no-copy` feature. This feature cuts down on disk space used by the hashing node, especially on the initial build of the dataset. 
Without it, an entire dataset would be copied into the local IPFS datastore directory. This can create problems, since the default flat file system datastore (`flatfs`) can start to run out of index nodes (the software representation of disk locations) after a few million files, leading to hashing failure. Arbol is also experimenting with [Badger](https://github.com/ipfs/kubo/releases/tag/v0.5.0), an alternative to flat file storage, in collaboration with the IPFS core team as the core team considers incorporating this change into IPFS itself. 6. **Verification:** To ensure no errors were introduced to files during the parsing stage, queries are made to the source data files and compared against the results of an identical query made to the parsed, hashed data. 7. **Publishing:** Once a hash has been verified, it is posted to Arbol's master heads reference file, and is at this point accessible via Arbol's gateway and available for use in contracts. -8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](./reference/kubo/cli.md#ipfs-pin) command on it. Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers. +8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](../reference/kubo/cli.md#ipfs-pin) command on it. 
Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers. 9. **Garbage collection:** Some older Arbol datasets require [garbage collection](glossary.md#garbage-collection) whenever new data is added, due to a legacy method of overwriting old hashes with new hashes. However, all of Arbol's newer datasets use an architecture where old hashes are preserved and new posts reference the previous post. This methodology creates a linked list of hashes, with each hash containing a reference to the previous hash. As the length of the list becomes computationally burdensome, the system consolidates intermediate nodes and adds a new route to the head, creating a [DAG (directed acyclic graph)](merkle-dag.md) structure. Heads are always stored in a master [heads.json reference file](https://gateway.arbolmarket.com/climate/hashes/heads.json) located on Arbol's command server. diff --git a/docs/concepts/case-study-audius.md b/docs/concepts/case-study-audius.md index 914b2dc9c..592e4c6c1 100644 --- a/docs/concepts/case-study-audius.md +++ b/docs/concepts/case-study-audius.md @@ -58,7 +58,7 @@ One other key ingredient in Audius' decision to initially adopt IPFS was its sep As a large user of the IPFS network, Audius has taken advantage of the [official IPFS forums](https://discuss.ipfs.tech), as well as support provided directly from the core IPFS development team. They are particularly impressed with the level of support and third-party tools that are available on IPFS. 
-"We think about the IPFS and Filecoin community as a great role model for what we are doing with the community around Audius, in terms of activity and robustness," says Nagaraj. "There are a lot of developers who are constantly contributing to IPFS. A few post on websites like [simpleaswater.com](http://www.simpleaswater.com) with tons of examples of what you can do with IPFS, how to actually implement it, breaking down all the details. We would aim for something like that. It would be incredible for us if we could reach that level of community participation." Nagaraj also calls out as particularly helpful blog posts and other content created by third-party contributors to the codebase, as well as the ecosystem that is developing around IPFS collaborators such as [Textile](http://textile.io/) and [Pinata](https://pinata.cloud/). Having such an active community around an open-source project adds to the momentum and progress of IPFS as a whole. +"We think about the IPFS and Filecoin community as a great role model for what we are doing with the community around Audius, in terms of activity and robustness," says Nagaraj, who calls out blog posts and other content created by third-party contributors to the codebase as particularly helpful, as well as the ecosystem that is developing around IPFS collaborators such as [Textile](http://textile.io/) and [Pinata](https://pinata.cloud/). Having such an active community around an open-source project adds to the momentum and progress of IPFS as a whole. 
## IPFS benefits diff --git a/docs/concepts/case-study-likecoin.md b/docs/concepts/case-study-likecoin.md index c6ef1118c..46b4fcccd 100644 --- a/docs/concepts/case-study-likecoin.md +++ b/docs/concepts/case-study-likecoin.md @@ -57,7 +57,7 @@ Content shared within Liker Land is stored and delivered using IPFS's distribute _— Chung Wu, chief researcher, LikeCoin_ ::: -The end-user workflow is simple and intuitive: Content creators, curators, and consumers take part in the LikeCoin ecosystem by using the free [Liker Land](https://liker.land/getapp) app, a reader and wallet for engaging with content. LikeCoin also offers browser extensions for [Chromium](https://chrome.google.com/webstore/detail/liker-land/cjjcemdmkddjbofomfgjedpiifpgkjhe?hl=en) (Chrome and Brave) and [Firefox](https://addons.mozilla.org/en-US/firefox/addon/liker-land/?src=search), so users can add material to their Liker Land reading lists on the fly. Outside the Liker Land app, creators can collect likes directly from WordPress, Medium, and other common blogging platforms using an easy-to-implement LikeCoin button plugin. +The end-user workflow is simple and intuitive: Content creators, curators, and consumers take part in the LikeCoin ecosystem by using the free [Liker Land](https://liker.land/getapp) app, a reader and wallet for engaging with content. LikeCoin also offers a browser extension for [Chromium](https://chrome.google.com/webstore/detail/liker-land/cjjcemdmkddjbofomfgjedpiifpgkjhe?hl=en) (Chrome and Brave), so users can add material to their Liker Land reading lists on the fly. Outside the Liker Land app, creators can collect likes directly from WordPress, Medium, and other common blogging platforms using an easy-to-implement LikeCoin button plugin. 
As a "free republic" of content creators, curators and publishers, and consumers, Liker Land also operates as a decentralized autonomous organization (DAO) with a [Cosmos](https://cosmos.network/)-based bonded proof-of-stake mechanism in which every "citizen" of Liker Land participates in blockchain governance. Acquiring more LikeCoin increases a user's voting power in Liker Land. While taking part in Liker Land is free of charge, users can also become Civic Likers — the Liker Land equivalent of "taxpayers" — for a flat monthly rate, acting as ongoing supporters whose contributions fund creators. diff --git a/docs/concepts/case-study-snapshot.md b/docs/concepts/case-study-snapshot.md index acfbb3760..9ff794010 100644 --- a/docs/concepts/case-study-snapshot.md +++ b/docs/concepts/case-study-snapshot.md @@ -93,9 +93,9 @@ These voting systems are used to calculate the results of a vote based on the vo ## How Snapshot uses IPFS -Snapshot uses IPFS to make the whole voting process fully transparent and auditable. Every space, proposal, vote, and user action is added to IPFS and has a [content identifier (CID)](/concepts/content-addressing/). +Snapshot uses IPFS to make the whole voting process fully transparent and auditable. Every space, proposal, vote, and user action is added to IPFS and has a [content identifier (CID)](/concepts/content-addressing.md). -Additionally, the Snapshot UI is also [available on IPFS](https://bafybeihzjoqahhgrhnsksyfubnlmjvkt66aliodeicywwtofodeuo2icde.ipfs.dweb.link/) and linked using the ENS name `shot.eth` which is accessible via any ENS resolution service, e.g. [shot.eth.limo](https://shot.eth.limo/), and [shot.eth.link](https://shot.eth.link/) (see the `x-ipfs-path` and `X-Ipfs-Roots` headers when making an HTTP request.) 
+Additionally, the Snapshot UI is also [available on IPFS](https://bafybeihzjoqahhgrhnsksyfubnlmjvkt66aliodeicywwtofodeuo2icde.ipfs.dweb.link/) and linked using the ENS name `shot.eth` which is accessible via any ENS resolution service, e.g. [shot.eth.limo](https://shot.eth.limo/)(see the `x-ipfs-path` and `X-Ipfs-Roots` headers when making an HTTP request.) To understand how Snapshot uses IPFS, it's useful to understand how the whole architecture was designed. Snapshot is a hybrid app combining design patterns common to Web2 and Web3 apps, and is based on the three-tier architecture: @@ -115,7 +115,7 @@ pineapple.js exposes a `pin` method that takes a JSON object and sends it to the ### Open access via IPFS Gateways -After data is added to the IPFS network via pinning services, it is also made available for viewing by users via an [IPFS Gateway](/concepts/ipfs-gateway/). Links to the signed messages for [proposals](https://snapshot.mypinata.cloud/ipfs/bafkreigva2y23hnepirhvup2widmawmjiih3kvvuaph3a7mrivkiqcvuki) and [votes](https://snapshot.mypinata.cloud/ipfs/bafkreibozdzgw5y5piburro6pxspw7yjcdaymj3fyqjl2rohsthnqfwc6e) are integrated into the Snapshot UI. +After data is added to the IPFS network via pinning services, it is also made available for viewing by users via an [IPFS Gateway](/concepts/ipfs-gateway.md). Links to the signed messages for [proposals](https://snapshot.mypinata.cloud/ipfs/bafkreigva2y23hnepirhvup2widmawmjiih3kvvuaph3a7mrivkiqcvuki) and [votes](https://snapshot.mypinata.cloud/ipfs/bafkreibozdzgw5y5piburro6pxspw7yjcdaymj3fyqjl2rohsthnqfwc6e) are integrated into the Snapshot UI. ## IPFS benefits diff --git a/docs/concepts/glossary.md b/docs/concepts/glossary.md index f31c633e4..42ecdfd9d 100644 --- a/docs/concepts/glossary.md +++ b/docs/concepts/glossary.md @@ -100,7 +100,7 @@ Version 1 (v1) of the IPFS content identifier. 
+Additionally, the Snapshot UI is also [available on IPFS](https://bafybeihzjoqahhgrhnsksyfubnlmjvkt66aliodeicywwtofodeuo2icde.ipfs.dweb.link/) and linked using the ENS name `shot.eth`, which is accessible via any ENS resolution service, e.g. [shot.eth.limo](https://shot.eth.limo/) (see the `x-ipfs-path` and `X-Ipfs-Roots` headers when making an HTTP request). To understand how Snapshot uses IPFS, it's useful to understand how the whole architecture was designed. Snapshot is a hybrid app combining design patterns common to Web2 and Web3 apps, and is based on the three-tier architecture: @@ -115,7 +115,7 @@ pineapple.js exposes a `pin` method that takes a JSON object and sends it to the ### Open access via IPFS Gateways -After data is added to the IPFS network via pinning services, it is also made available for viewing by users via an [IPFS Gateway](/concepts/ipfs-gateway/). Links to the signed messages for [proposals](https://snapshot.mypinata.cloud/ipfs/bafkreigva2y23hnepirhvup2widmawmjiih3kvvuaph3a7mrivkiqcvuki) and [votes](https://snapshot.mypinata.cloud/ipfs/bafkreibozdzgw5y5piburro6pxspw7yjcdaymj3fyqjl2rohsthnqfwc6e) are integrated into the Snapshot UI. +After data is added to the IPFS network via pinning services, it is also made available for viewing by users via an [IPFS Gateway](/concepts/ipfs-gateway.md). Links to the signed messages for [proposals](https://snapshot.mypinata.cloud/ipfs/bafkreigva2y23hnepirhvup2widmawmjiih3kvvuaph3a7mrivkiqcvuki) and [votes](https://snapshot.mypinata.cloud/ipfs/bafkreibozdzgw5y5piburro6pxspw7yjcdaymj3fyqjl2rohsthnqfwc6e) are integrated into the Snapshot UI. ## IPFS benefits diff --git a/docs/concepts/glossary.md b/docs/concepts/glossary.md index f31c633e4..42ecdfd9d 100644 --- a/docs/concepts/glossary.md +++ b/docs/concepts/glossary.md @@ -100,7 +100,7 @@ Version 1 (v1) of the IPFS content identifier. 
This CID version contains some le ### Circuit relay -A [libp2p](#libp2p) term for transport protocol that routes traffic between two peers over a third-party [_relay_ peer](#relay). [More about Circuit Relay](https://docs.libp2p.io/concepts/circuit-relay/). +A [libp2p](#libp2p) term for transport protocol that routes traffic between two peers over a third-party [_relay_ peer](#relay-node). [More about Circuit Relay](https://docs.libp2p.io/concepts/circuit-relay/). ### Circuit relay v1 @@ -146,7 +146,7 @@ DAG-CBOR is a [codec](#codec) that implements the [IPLD Data Model](https://ipld ## DAG-PB -DAG-PB is a [codec](#codec) that implements a very small subset of the [IPLD Data Model](https://ipld.io/glossary/#data-model) in a particular set of [Protobuf](#protobuf) messages used in IPFS for defining how [UnixFS](#UnixFS)v1 data is serialized. [More about DAG-PB](https://ipld.io/specs/codecs/dag-pb/spec/) +DAG-PB is a [codec](#codec) that implements a very small subset of the [IPLD Data Model](https://ipld.io/glossary/#data-model) in a particular set of [Protobuf](#protobuf) messages used in IPFS for defining how [UnixFS](#unixfs)v1 data is serialized. [More about DAG-PB](https://ipld.io/specs/codecs/dag-pb/spec/) ### Data model @@ -212,7 +212,7 @@ Old name of [Kubo](#kubo). ### Graph -In computer science, a Graph is an abstract data type from the field of graph theory within mathematics. The [Merkle-DAG](#merkledag) used in IPFS is a specialized graph. +In computer science, a Graph is an abstract data type from the field of graph theory within mathematics. The [Merkle-DAG](#merkle-dag) used in IPFS is a specialized graph. ### Graphsync diff --git a/docs/concepts/libp2p.md b/docs/concepts/libp2p.md index d72e19e84..5ed51ae1c 100644 --- a/docs/concepts/libp2p.md +++ b/docs/concepts/libp2p.md @@ -3,10 +3,10 @@ title: Libp2p sidebarDepth: 0 description: Learn about the Libp2p protocol and why it's an important ingredient in how IPFS works. 
related: - 'What is Libp2p?': https://docs.libp2p.io/introduction/what-is-libp2p/ + 'What is Libp2p?': https://docs.libp2p.io/introduction/#what-is-libp2p 'Foundational Libp2p concepts': https://docs.libp2p.io/concepts/ - 'Getting started with Libp2p': https://docs.libp2p.io/tutorials/getting-started/ - 'Examples of Libp2p key features': https://docs.libp2p.io/examples/ + 'Getting started with Libp2p': https://docs.libp2p.io/guides/ + 'Examples of Libp2p key features': https://docs.libp2p.io/guides/ --- # Libp2p @@ -29,23 +29,23 @@ Someone using Libp2p for the network layer of their peer-to-peer application is Libp2p works with a lot of different addressing schemes in a consistent way. A multiaddress (abbreviated [multiaddr](https://github.com/multiformats/multiaddr)) encodes multiple layers of addressing information into a single "future-proof" path structure. For example, `/ipv4/171.113.242.172/udp/162` indicates the use of the IPv4 protocol with the address 171.113.242.172, along with sending UDP packets to port 162. -#### [Transport](https://docs.libp2p.io/concepts/transport/) +#### [Transport](https://docs.libp2p.io/concepts/transports/overview/) The technology used to move your data from one machine to another. Transports are defined in terms of two core operations, _listening_ and _dialing_. Listening means that you can accept incoming connections from other peers. _Dialing_ is the process of opening an outgoing connection to a listening peer. One of Libp2p's core requirements is to be _transport agnostic_, meaning the decision of what transport protocol to use is up to an application's developer (who may decide to support many different _transports_ at the same time). -#### [Security](https://docs.libp2p.io/introduction/what-is-libp2p/#security) +#### [Security](https://docs.libp2p.io/concepts/secure-comm/overview/) -Libp2p supports upgrading a transport connection into a securely encrypted channel. 
You can then trust the identity of the peer you're communicating with and that no third-party can read the conversation or alter it in-flight. The current default is [TLS 1.3](https://www.ietf.org/blog/tls13/) as of IPFS 0.7. The previous default of [SECIO](https://docs.libp2p.io/concepts/secure-comms/) is now deprecated and disabled by default (see [this blog post](https://blog.ipfs.tech/2020-08-07-deprecating-secio/) for more information). +Libp2p supports upgrading a transport connection into a securely encrypted channel. You can then trust the identity of the peer you're communicating with and that no third-party can read the conversation or alter it in-flight. The current default is [TLS 1.3](https://www.ietf.org/blog/tls13/) as of IPFS 0.7. The previous default of [SECIO](https://docs.libp2p.io/concepts/secure-comm/overview) is now deprecated and disabled by default (see [this blog post](https://blog.ipfs.tech/2020-08-07-deprecating-secio/) for more information). -#### [Peer identity](https://docs.libp2p.io/concepts/peer-id/) +#### [Peer identity](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) A Peer Identity ([often written _PeerId_](https://docs.libp2p.io/reference/glossary/#peerid)) is a unique reference to a specific peer on the peer-to-peer network. Each Libp2p peer has a private key, which it keeps secret from all other peers, and a corresponding public key, which is shared with other peers. The PeerId is a [cryptographic hash](https://en.wikipedia.org/wiki/Cryptographic_hash_function) of a peer's public key. PeerIds are encoded using the [multihash](https://docs.libp2p.io/reference/glossary/#multihash) format. -#### [Peer routing](https://docs.libp2p.io/introduction/what-is-libp2p/#peer-routing) +#### [Peer routing](https://docs.libp2p.io/introduction/#peer-routing) Peer Routing is the process of discovering peer addresses by using the knowledge of other peers. 
In a peer routing system, a peer can either give us the address we need if they have it or else send our inquiry to another peer who's more likely to have the answer. Peer Routing in Libp2p uses a [distributed hash table](https://docs.libp2p.io/reference/glossary/#dht) to iteratively route requests closer to the desired PeerId using the [Kademlia](https://en.wikipedia.org/wiki/Kademlia) routing algorithm. -#### [Content discovery](https://docs.libp2p.io/introduction/what-is-libp2p/#content-discovery) +#### [Content discovery](https://docs.libp2p.io/concepts/introduction/overview/#content-discovery) In Content discovery, you ask for some specific piece of data, but you don't care who sends it since you're able to verify its integrity. Libp2p provides a [content routing interface](https://github.com/libp2p/interface-content-routing) for this purpose, with the primary stable implementation using the same [Kademlia](https://en.wikipedia.org/wiki/Kademlia)-based DHT as used in peer routing. @@ -53,23 +53,23 @@ In Content discovery, you ask for some specific piece of data, but you don't car Network Address Translation (NAT) allows you to move traffic seamlessly between network boundaries. NAT maps an address from one address space to another. While NAT is usually transparent for outgoing connections, listening for incoming connections requires some configuration. Libp2p has the following main approaches to NAT traversal available: [Automatic router configuration](https://docs.libp2p.io/concepts/nat/#automatic-router-configuration), [Hole punching (STUN)](https://docs.libp2p.io/concepts/nat/#hole-punching-stun), [AutoNAT](https://docs.libp2p.io/concepts/nat/#autonat), and [Circuit Relay (TURN)](https://docs.libp2p.io/concepts/nat/#circuit-relay-turn). 
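The "iteratively route requests closer" step in peer routing relies on Kademlia's XOR distance between IDs. A minimal sketch, using hypothetical 8-bit integer IDs rather than real multihash PeerIds:

```python
# Kademlia-style XOR metric on toy 8-bit IDs (real PeerIds are multihashes).
def xor_distance(a: int, b: int) -> int:
    # Kademlia treats the bitwise XOR of two IDs as their "distance".
    return a ^ b


def closest_peers(target: int, known_peers: list[int], k: int = 2) -> list[int]:
    # At each query round a node returns the k peers it knows about
    # that are closest to the target by XOR distance.
    return sorted(known_peers, key=lambda p: xor_distance(p, target))[:k]


# A peer looking for target 0b1011 asks for the closest known peers:
print(closest_peers(0b1011, [0b0001, 0b1000, 0b1111, 0b1010]))  # -> [10, 8]
```

Repeating this query against each returned peer converges on the node responsible for the target ID.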
-#### [Protocol](https://docs.libp2p.io/concepts/protocols/) +#### [Protocol](https://docs.libp2p.io/concepts/fundamentals/protocols) -These are the protocols built with Libp2p itself, using core Libp2p abstractions like [transport](https://docs.libp2p.io/concepts/transport/), [peer identity](https://docs.libp2p.io/concepts/peer-id/), and [addressing](https://docs.libp2p.io/concepts/addressing/). Each Libp2p protocol has a unique string identifier used in the [protocol negotiation](https://docs.libp2p.io/concepts/protocols/#protocol-negotiation) process when connections are first opened. The core Libp2p protocols are [Ping](https://docs.libp2p.io/concepts/protocols/#ping), [Identify](https://docs.libp2p.io/concepts/protocols/#identify), [secio](https://docs.libp2p.io/concepts/protocols/#secio), [kad-dht](https://docs.libp2p.io/concepts/protocols/#kad-dht), and [Circuit Relay](https://docs.libp2p.io/concepts/protocols/#circuit-relay). +These are the protocols built with Libp2p itself, using core Libp2p abstractions like [transport](https://docs.libp2p.io/concepts/transports/overview), [peer identity](https://docs.libp2p.io/reference/glossary/#peerid), and [addressing](https://docs.libp2p.io/concepts/addressing/). Each Libp2p protocol has a unique string identifier used in the [protocol negotiation](https://docs.libp2p.io/concepts/protocols/#protocol-negotiation) process when connections are first opened. The core Libp2p protocols are [Ping](https://docs.libp2p.io/concepts/protocols/#ping), [Identify](https://docs.libp2p.io/concepts/protocols/#identify), [secio](https://docs.libp2p.io/concepts/protocols/#secio), [kad-dht](https://docs.libp2p.io/concepts/protocols/#kad-dht), and [Circuit Relay](https://docs.libp2p.io/concepts/protocols/#circuit-relay). #### [Stream multiplexing](https://docs.libp2p.io/concepts/stream-multiplexing/) Often abbreviated as _stream muxing_, this allows multiple independent logical streams to all share a common underlying transport medium. 
Libp2p's stream multiplexer sits "above" the transport stack and allows many streams to flow over a single TCP port or other raw transport connection. The current stream multiplexing implementations are [mplex](https://docs.libp2p.io/concepts/stream-multiplexing/#mplex), [yamux](https://docs.libp2p.io/concepts/stream-multiplexing/#yamux), [quic](https://docs.libp2p.io/concepts/stream-multiplexing/#quic), [spdy](https://docs.libp2p.io/concepts/stream-multiplexing/#spdy), and [muxado](https://docs.libp2p.io/concepts/stream-multiplexing/#muxado). -#### [Publish and subscribe](https://docs.libp2p.io/concepts/publish-subscribe/) +#### [Publish and subscribe](https://docs.libp2p.io/concepts/pubsub/overview/) Often abbreviated as _pub-sub_, this is a system where peers congregate around topics they are interested in. Peers interested in a topic are said to be subscribed to that topic. Peers send messages to topics, which get delivered to all peers subscribed to the topic. Example uses of pub-sub are chat rooms and file sharing. For more detail and a discussion of other pub-sub designs, see the [gossipsub specification](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/README.md). 
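The topic-based delivery described above amounts to a subscription registry. A minimal sketch with invented names; the real gossipsub protocol adds meshing, gossip, and peer scoring on top of this basic idea:

```python
from collections import defaultdict

# Minimal topic-based pub-sub registry (illustrative only, not gossipsub).
class PubSub:
    def __init__(self):
        self.topics = defaultdict(set)   # topic -> set of subscribed peers
        self.inbox = defaultdict(list)   # peer -> messages delivered to it

    def subscribe(self, peer, topic):
        self.topics[topic].add(peer)

    def publish(self, topic, message):
        # Deliver the message to every peer subscribed to the topic.
        for peer in self.topics[topic]:
            self.inbox[peer].append(message)


ps = PubSub()
ps.subscribe("alice", "chat-room")
ps.subscribe("bob", "chat-room")
ps.publish("chat-room", "hello")
print(ps.inbox["alice"], ps.inbox["bob"])  # -> ['hello'] ['hello']
```

Publishers never address peers directly; they only name a topic, which is what makes pub-sub a good fit for chat rooms and file-sharing announcements.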
## Additional resources -- [What is Libp2p?](https://docs.libp2p.io/introduction/what-is-libp2p/) +- [What is Libp2p?](https://docs.libp2p.io/introduction/#what-is-libp2p) - [Introduction to Libp2p](https://www.youtube.com/embed/CRe_oDtfRLw) -- [Getting started with Libp2p](https://docs.libp2p.io/tutorials/getting-started/) +- [Getting started with Libp2p](https://docs.libp2p.io/guides/) - [The Libp2p glossary](https://docs.libp2p.io/reference/glossary/) - [The Libp2p specification](https://github.com/libp2p/specs) - [ResNetLab on Tour - Content Routing](https://research.protocol.ai/tutorials/resnetlab-on-tour/content-routing/) diff --git a/docs/concepts/nodes.md b/docs/concepts/nodes.md index a928a3484..50954c8d4 100644 --- a/docs/concepts/nodes.md +++ b/docs/concepts/nodes.md @@ -27,7 +27,7 @@ There are different types of IPFS nodes. And depending on the use-case, a single - [Preload](#preload) - [Relay](#relay) - [Bootstrap](#bootstrap) -- [Delegate routing](#delegate-routing) +- [Delegate routing](#delegate-routing-node) ### Preload @@ -38,7 +38,7 @@ Features of a preload node: - They are Kubo nodes with API ports exposed. Some HTTP API commands are accessible. - Used by JS-IPFS nodes running in browser contexts. - JS-ipfs nodes remain connected to the libp2p swarm ports of all preload nodes by having preload nodes on the bootstrap list. -- Often on the same _server_ as a [delegate routing node](#delegate-routing), though both the delegate routing service and preload service are addressed differently. This is done by having different multiaddrs that resolve to the same machine. +- Often on the same _server_ as a [delegate routing node](#delegate-routing-node), though both the delegate routing service and preload service are addressed differently. This is done by having different multiaddrs that resolve to the same machine. 
- Preload nodes are in the default JS-IPFS configuration as bootstrap nodes, so they will maintain libp2p swarm connections to them at all times. - They are configured as regular bootstrap nodes, but as a convention have the string 'preload' in their `/dnsaddr` multiaddrs. @@ -85,7 +85,7 @@ Limitations of a bootstrap node: ### Delegate routing node -When IPFS nodes are unable to run Distributed Hash Table (DHT) logic on their own, they _delegate_ the task to a delegate routing node. Publishing works with arbitrary CID codecs (compression/decompression technology), as the [js-delegate-content module](https://github.com/libp2p/js-libp2p-delegated-content-routing/blob/master/src/index.js#L127-L128) publishes CIDs at the block level rather than the IPLD or DAG level. +When IPFS nodes are unable to run Distributed Hash Table (DHT) logic on their own, they _delegate_ the task to a delegate routing node. Publishing works with arbitrary CID codecs (the codec identifies how a block's content is encoded, for example `dag-pb` or `raw`), as the [js-delegate-content module](https://github.com/libp2p/js-libp2p-delegated-content-routing/blob/master/src/index.ts) publishes CIDs at the block level rather than the IPLD or DAG level. Features of a delegate routing node: diff --git a/docs/how-to/browser-tools-frameworks.md b/docs/how-to/browser-tools-frameworks.md index acebe7720..9ed1e49d7 100644 --- a/docs/how-to/browser-tools-frameworks.md +++ b/docs/how-to/browser-tools-frameworks.md @@ -9,7 +9,7 @@ Want to learn how to use IPFS in combination with your favorite framework or browser tool? This page has you covered! ## Browserify -[Example and boilerplate](https://github.com/ipfs-examples/js-ipfs-examples/tree/master/examples/browser-browserify) you can use to guide yourself into bundling js-ipfs with browserify, so that you can use it in your own web app. 
+[Example and boilerplate](https://github.com/ipfs-examples/js-ipfs-browser-browserify) you can use as a guide to bundling js-ipfs with Browserify, so that you can use it in your own web app. ## Parcel.js diff --git a/docs/how-to/create-simple-chat-app.md b/docs/how-to/create-simple-chat-app.md index 749733166..33209572f 100644 --- a/docs/how-to/create-simple-chat-app.md +++ b/docs/how-to/create-simple-chat-app.md @@ -19,7 +19,7 @@ The heading shows which user is chatting and has a status indicator in the top l - Yellow means you're only seeing direct peers (no other peer in the middle). - Red means you have no peers (at least none using the chat application). -To see a live demo, start your ipfs daemon (open IPFS Desktop or enter ipfs daemon in the CLI) and have a chat buddy do the same. Then you can both open the [live demo](https://ipfs.io/ipfs/bafybeia5f2yk6td7ciroeped2uwfivo333b524t3zmoderfhl3xn7wi7aa/&sa=D&source=editors&ust=1651157762663308&usg=AOvVaw1sQEgWa5q7YI8HnLTPUq0Y) and chat. Once our chat app gets some traction, you’ll be able to make new friends on the network. +To see a live demo, start your IPFS daemon (open IPFS Desktop or enter `ipfs daemon` in the CLI) and have a chat buddy do the same. Then you can both open the [live demo](https://ipfs.io/ipfs/bafybeia5f2yk6td7ciroeped2uwfivo333b524t3zmoderfhl3xn7wi7aa/) and chat. Once our chat app gets some traction, you’ll be able to make new friends on the network. ## How it works diff --git a/docs/how-to/troubleshooting.md b/docs/how-to/troubleshooting.md index eec77d052..bd67f6f30 100644 --- a/docs/how-to/troubleshooting.md +++ b/docs/how-to/troubleshooting.md @@ -63,7 +63,7 @@ The first thing to do is to double-check that both nodes are, in fact, running a } ``` -Next, check to see if the nodes have a connection to each other. You can do this by running `ipfs swarm peers` on one node and checking for the other node's peer ID in the output. 
If the two nodes _are_ connected, and the `ipfs get` command is still hanging, then something unexpected is going on, and I recommend filing an issue about it. If they are not connected, then let's try and debug why. (Note: you can skip to [Manually connecting `node a` to `node b`](#manually-connecting-node-a-to-b) if you just want things to work. However, going through the debugging process and reporting what happened to the IPFS team on IRC is helpful to us to understand common pitfalls that people run into). +Next, check to see if the nodes have a connection to each other. You can do this by running `ipfs swarm peers` on one node and checking for the other node's peer ID in the output. If the two nodes _are_ connected, and the `ipfs get` command is still hanging, then something unexpected is going on, and we recommend filing an issue about it. If they are not connected, then let's try and debug why. (Note: you can skip to [Manually connecting `node a` to `node b`](#manually-connecting-node-a-to-node-b) if you just want things to work. However, going through the debugging process and reporting what happened to the IPFS team on IRC helps us understand common pitfalls that people run into). ### Checking providers diff --git a/docs/how-to/websites-on-ipfs/redirects-and-custom-404s.md b/docs/how-to/websites-on-ipfs/redirects-and-custom-404s.md index b4dc12968..969639288 100644 --- a/docs/how-to/websites-on-ipfs/redirects-and-custom-404s.md +++ b/docs/how-to/websites-on-ipfs/redirects-and-custom-404s.md @@ -8,7 +8,7 @@ description: What the _redirect file is and how to use them with a website or si This feature is new, and requires Kubo 0.16 or later. ::: -This feature enables support for redirects, [single-page applications](#catch-all-and-pwa-spa-support), [custom 404 pages](#add-a-custom-404-page-to-your-website), and moving to IPFS-backed hosting [without breaking existing links](https://www.w3.org/Provider/Style/URI). 
+This feature enables support for redirects, [single-page applications](#examples), [custom 404 pages](#add-a-custom-404-page-to-your-website), and moving to IPFS-backed hosting [without breaking existing links](https://www.w3.org/Provider/Style/URI). ## Evaluation diff --git a/docs/project/history.md b/docs/project/history.md index 11c89a31d..1a2ab27c3 100644 --- a/docs/project/history.md +++ b/docs/project/history.md @@ -62,7 +62,7 @@ According to [Uncle Ben](https://en.wikipedia.org/wiki/Uncle_Ben#%22With_great_p This focus bore significant results in the IPFS community in 2019. Protocol Labs hosted the [first IPFS Camp](https://camp.ipfs.io/) in Barcelona in June. The retreat brought together 150 distributed-web pioneers to learn, collaborate, and build. It inspired a [successful collaboration](https://blog.ipfs.tech/2020-02-14-improved-bitswap-for-container-distribution/) with one of the biggest, most innovative corporations in the world, Netflix. By the end of 2019, the IPFS network had grown by more than 30x. The community of open-source contributors stood at more than 4,000. -The April 2020 [Kubo 0.5.0 release](https://blog.ipfs.tech/2020-04-28-kubo-0-5-0/) provided the largest performance upgrades to the network yet: faster file adding (2x), providing (2.5x), finding (2-6x), and fetching (2-5x). For the ever-growing [IPFS ecosystem](https://ipfs.io/images/ipfs-applications-diagram.png), reliability is just as important as speed. For that, Protocol Labs developed, used, and released [Testground](https://blog.ipfs.tech/2020-05-06-launching-testground/). Testground is a huge step forward in testing and hardening P2P systems not just for IPFS, but the community at-large. +The April 2020 [Kubo 0.5.0 release](https://github.com/ipfs/kubo/blob/master/docs/changelogs/v0.5.md) provided the largest performance upgrades to the network yet: faster file adding (2x), providing (2.5x), finding (2-6x), and fetching (2-5x). 
For the ever-growing [IPFS ecosystem](https://ipfs.io/images/ipfs-applications-diagram.png), reliability is just as important as speed. For that, Protocol Labs developed, used, and released [Testground](https://blog.ipfs.tech/2020-05-06-launching-testground/). Testground is a huge step forward in testing and hardening P2P systems not just for IPFS, but for the community at large. Major collaborations with [Opera](https://blog.ipfs.tech/2020-03-30-ipfs-in-opera-for-android/), [Microsoft ION](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/toward-scalable-decentralized-identifier-systems/ba-p/560168), and [Cloudflare](https://www.cloudflare.com/distributed-web-gateway/) just scratch the surface of possibilities for IPFS. The H2 2020 Filecoin Mainnet launch is poised to fundamentally shift economic incentives of the P2P IPFS network to compete with the entrenched client-server web.
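
Reviewer note on the `redirects-and-custom-404s.md` hunk: since the page's intro now enumerates redirects, single-page applications, and custom 404 pages, a minimal `_redirects` sketch could help readers picture the feature. The paths and file names below are illustrative, not taken from the docs; the syntax assumed is the Netlify-style format that Kubo 0.16+ evaluates from a `_redirects` file in the root directory of the site, with `#` starting a comment.

```
# Illustrative _redirects file, placed in the root directory of the site.
# Each line: <from> <to> [<status>]; the status code is optional.

# Preserve old inbound links after moving a page
/blog/first-post   /articles/first-post   301

# Single-page application: rewrite app routes to the app shell
/app/*             /app/index.html        200

# Custom 404 page for anything else
/*                 /404.html              404
```

A sketch like this could accompany the intro sentence so readers see all three use cases (redirect, SPA rewrite, custom 404) in one file.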