
Commit 16bb246

fix links (#1306)
* fix links
* Fix link to command line install section
* fix links to install subsections and concepts glossary
* Fix want-list link and format headers
* Fix links to ipfs add and ipfs pin
* use md extension in path for links
* fix relay, merkle-dag, unixfs links
* Fix various broken links on libp2p page
* Fix links on nodes page
* Fix link to manually connecting node a to node b
* fix link to catch all and pwa spa support
* accidentally added a python swp file, this should not be here
* Address point 2 in #1306 (comment)
* Address point 3 in #1306 (comment)
* Remove shot eth link
* Update browserify link
* Update link to chat app live demo
* Link to Kubo 050 changelog instead of missing blog
* Fix links to kubo cli install and glossary that Tmo found plus the another
* Re-change links to LibP2P docs back to what they were originally per Danny's comments
* Update links now that Doks theme is merged
* Fix links per #1306 (comment)
1 parent 40e7140 commit 16bb246

18 files changed, +59 -44 lines changed

.pre-commit-config.yaml

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
+ repos:
+   - repo: https://github.com/igorshubovych/markdownlint-cli
+     rev: v0.32.2
+     hooks:
+       - id: markdownlint
+
+ repos:
+   - repo: https://github.com/tcort/markdown-link-check
+     rev: v3.10.3
+     hooks:
+       - id: markdown-link-check
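
For context, the two hooks added here are normally driven by the [pre-commit](https://pre-commit.com) tool, which this commit configures but does not install. A minimal local run, assuming pre-commit is available on the contributor's machine, would look roughly like:

```shell
pip install pre-commit        # or: brew install pre-commit
pre-commit install            # register the hooks with git
pre-commit run --all-files    # run markdownlint and markdown-link-check across the repo
```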

README.md

Lines changed: 2 additions & 2 deletions
@@ -26,7 +26,7 @@
  - [Static-site generator](#static-site-generator)
  - [Automated deployments](#automated-deployments)
  - [Translation](#translation)
- - [Core members](#core-members)
+ - [Core members](#primary-maintainers)
  - [License](#license)
  <!-- /TOC -->

@@ -149,7 +149,7 @@ Please stay tuned on the steps to translate the documentation.
  - [@johnnymatthews](https://github.com/johnnymatthews): Project leadership & organization
  - [@cwaring](https://github.com/cwaring): Development support
  - [@2color](https://github.com/2color): Developer relations & technical writing(ecosystem)
- - [@DannyS03](https://github.com/DannyS03}: Technical writing(engineering)
+ - [@DannyS03](https://github.com/DannyS03): Technical writing(engineering)
  - [@jennijuju](https://github.com/jennijuju): Management and supervision

  ## License

docs/basics/README.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ Have an idea of what IPFS is but haven't really used it before? You should start

  ![An IPFS daemon running in a terminal window.](./images/ipfs-command-line.png)

- If you're a bit more serious about IPFS and want to start poking around the command-line interface (CLI), then this section is for you. No buttons or images here; [just good-old-fashioned CLI interfaces and pipeable commands →](./command-line.md)
+ If you're a bit more serious about IPFS and want to start poking around the command-line interface (CLI), then this section is for you. No buttons or images here; [just good-old-fashioned CLI interfaces and pipeable commands →](../install/command-line.md)

  ## Other implementations

docs/basics/desktop-app.md

Lines changed: 2 additions & 2 deletions
@@ -8,12 +8,12 @@ description: "A simple walkthrough of the basic functions of the IPFS desktop ap
  This guide will walk you through the basics of IPFS Desktop and teach you how to add, remove, and download a file using IPFS. This guide will only cover the basics and will avoid talking about more complex concepts.

  :::tip Use the glossary
- Some of these terms might be unfamiliar to you, and that's ok! Just check out the [glossary](../concepts/glossary/)! There you'll find definitions of all the common terms used when talking about IPFS.
+ Some of these terms might be unfamiliar to you, and that's ok! Just check out the [glossary](../concepts/glossary.md)! There you'll find definitions of all the common terms used when talking about IPFS.
  :::

  ## Install IPFS Desktop

- Installation instructions for [macOS](../install/ipfs-desktop/#macos), [Ubuntu](../install/ipfs-desktop/#ubuntu), and [Windows](../install/ipfs-desktop/#windows).
+ Installation instructions for [macOS](../install/ipfs-desktop.md#macos), [Ubuntu](../install/ipfs-desktop.md#ubuntu), and [Windows](../install/ipfs-desktop.md#windows).

  The installation guides linked above are straightforward and easy to follow; simply follow the instructions that correspond to your operating system, and you will have IPFS Desktop going in just a few minutes.

docs/basics/go/command-line.md

Lines changed: 3 additions & 3 deletions
@@ -7,13 +7,13 @@ description: "A simple walkthrough of how to perform basic IPFS operations using

  This short guide aims to walk you through the **basics of using IPFS with the Kubo CLI**. Kubo is [one of multiple IPFS implementations](../ipfs-implementations.md). It is the oldest IPFS implementation and exposes a CLI (among other things).

- You will learn how to add, retrieve, read, and remove files within the CLI. If you are unsure about the meaning of some terms, you can check out the [glossary](../concepts/glossary.md).
+ You will learn how to add, retrieve, read, and remove files within the CLI. If you are unsure about the meaning of some terms, you can check out the [glossary](../../concepts/glossary.md).

  All instructions and examples shown here were performed and tested on an M1 Mac. However, the IPFS commands are the same on Linux, macOS, and Windows. You will need to know how to navigate your computer's directories from within the CLI. If you're unsure how to use the CLI, we recommend learning how before continuing with this guide.

  ## Install Kubo

- Next up, we need to install Kubo for the command-line. We have a great guide that will walk you through how to [install Kubo with the CLI](../install/command-line.md).
+ Next up, we need to install Kubo for the command-line. We have a great guide that will walk you through how to [install Kubo with the CLI](../../install/command-line.md).

  Once you have Kubo installed, we need to get our node up and running. If this is your first time using Kubo, you will first need to initialize the configuration files:

@@ -191,7 +191,7 @@ We can _pin_ data we want to save to our IPFS node to ensure we don't lose this
  pinned bafybeif2ewg3nqa33mjokpxii36jj2ywfqjpy3urdh7v6vqyfjoocvgy3a recursively
  ```

- By default, objects that you retrieve over IPFS are not pinned to your node. If you wish to prevent the files from being garbage collected, you need to pin them. You will notice that the pin you just added is a `recursive` pin, meaning it is a directory containing other objects. Check out the [Pinning page to learn more about how this works](../concepts/persistence.md).
+ By default, objects that you retrieve over IPFS are not pinned to your node. If you wish to prevent the files from being garbage collected, you need to pin them. You will notice that the pin you just added is a `recursive` pin, meaning it is a directory containing other objects. Check out the [Pinning page to learn more about how this works](../../concepts/persistence.md).

  ## Remove a file

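As a pointer for readers of the guide edited above, the Kubo workflow it walks through maps onto commands along these lines (a sketch only, assuming Kubo is already installed; not part of this change):

```shell
ipfs init                      # create the node's configuration files on first use
ipfs daemon                    # start the node (run in a separate terminal)
ipfs pin ls --type=recursive   # list recursive pins, like the one shown in the diff
```
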
docs/concepts/bitswap.md

Lines changed: 8 additions & 4 deletions
@@ -10,7 +10,7 @@ related:

  # Bitswap

- Bitswap is a core module of IPFS for exchanging blocks of data. It directs the requesting and sending of blocks to and from other peers in the network. Bitswap is a _message-based protocol_ where all messages contain [want-lists](#want-list) or blocks. Bitswap has a [Go implementation](https://github.com/ipfs/go-bitswap) and a [JavaScript implementation](https://github.com/ipfs/js-ipfs-bitswap).
+ Bitswap is a core module of IPFS for exchanging blocks of data. It directs the requesting and sending of blocks to and from other peers in the network. Bitswap is a _message-based protocol_ where all messages contain [want-lists](#want-lists) or blocks. Bitswap has a [Go implementation](https://github.com/ipfs/go-bitswap) and a [JavaScript implementation](https://github.com/ipfs/js-ipfs-bitswap).

  Bitswap has two main jobs:

@@ -19,7 +19,11 @@ Bitswap has two main jobs:

  ## How Bitswap works

- IPFS breaks up files into chunks of data called _blocks_. These blocks are identified by a [content identifier (CID)](/concepts/content-addressing). When nodes running the Bitswap protocol want to fetch a file, they send out `want-lists` to other peers. A `want-list` is a list of CIDs for blocks a peer wants to receive. Each node remembers which blocks its peers want. Each time a node receives a block, it checks if any of its peers want the block and sends it to them if they do.
+ IPFS breaks up files into chunks of data called _blocks_. These blocks are identified by a [content identifier (CID)](/concepts/content-addressing.md).
+
+ ### Want Lists
+
+ When nodes running the Bitswap protocol want to fetch a file, they send out `want-lists` to other peers. A `want-list` is a list of CIDs for blocks a peer wants to receive. Each node remembers which blocks its peers want. Each time a node receives a block, it checks if any of its peers want the block and sends it to them if they do.

  Here is a simplifed version of a `want-list`:

@@ -31,13 +35,13 @@ Want-list {
  }
  ```

- #### Discovery
+ ### Discovery

  To find peers that have a file, a node running the Bitswap protocol first sends a request called a _want-have_ to all the peers it is connected to. This _want-have_ request contains the CID of the root block of the file (the root block is at the top of the DAG of blocks that make up the file). Peers that have the root block send a _have_ response and are added to a session. Peers that don't have the block send a _dont-have_ response. Bitswap builds up a map of which nodes have and don't have each block.

  ![Diagram of the _want-have/want-block_ process.](./images/bitswap/diagram-of-the-want-have-want-block-process.png =740x537)

- #### Transfer
+ ### Transfer

  Bitswap sends _want-block_ to peers that have the block, and they respond with the block itself. If none of the peers have the root block, Bitswap queries the Distributed Hash Table (DHT) to ask who can provide the root block.

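As a side note on the Bitswap page above: on a running Kubo node, the want-list and transfer activity this section describes can be inspected directly from the CLI (illustrative only, not part of this change):

```shell
ipfs bitswap wantlist   # CIDs this node is currently asking peers for
ipfs bitswap stat       # blocks and data sent/received over Bitswap
```
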
docs/concepts/case-study-arbol.md

Lines changed: 2 additions & 2 deletions
@@ -80,13 +80,13 @@ Arbol's end users enjoy the "it just works" benefits of parametric protection, b

  4. **Compression:** This step is the final one before data is imported to IPFS. Arbol compresses each file to save on disk space and reduce sync time.

- 5. **Hashing:** Arbol uses the stock IPFS recursive add operation ([`ipfs add -r`](./reference/kubo/cli/#ipfs-add)) for hashing, as well as the experimental `no-copy` feature. This feature cuts down on disk space used by the hashing node, especially on the initial build of the dataset. Without it, an entire dataset would be copied into the local IPFS datastore directory. This can create problems, since the default flat file system datastore (`flatfs`) can start to run out of index nodes (the software representation of disk locations) after a few million files, leading to hashing failure. Arbol is also experimenting with [Badger](https://github.com/ipfs/kubo/releases/tag/v0.5.0), an alternative to flat file storage, in collaboration with the IPFS core team as the core team considers incorporating this change into IPFS itself.
+ 5. **Hashing:** Arbol uses the stock IPFS recursive add operation ([`ipfs add -r`](../reference/kubo/cli.md#ipfs-add)) for hashing, as well as the experimental `no-copy` feature. This feature cuts down on disk space used by the hashing node, especially on the initial build of the dataset. Without it, an entire dataset would be copied into the local IPFS datastore directory. This can create problems, since the default flat file system datastore (`flatfs`) can start to run out of index nodes (the software representation of disk locations) after a few million files, leading to hashing failure. Arbol is also experimenting with [Badger](https://github.com/ipfs/kubo/releases/tag/v0.5.0), an alternative to flat file storage, in collaboration with the IPFS core team as the core team considers incorporating this change into IPFS itself.

  6. **Verification:** To ensure no errors were introduced to files during the parsing stage, queries are made to the source data files and compared against the results of an identical query made to the parsed, hashed data.

  7. **Publishing:** Once a hash has been verified, it is posted to Arbol's master heads reference file, and is at this point accessible via Arbol's gateway and available for use in contracts.

- 8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](./reference/kubo/cli.md#ipfs-pin) command on it. Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers.
+ 8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](../reference/kubo/cli.md#ipfs-pin) command on it. Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers.

  9. **Garbage collection:** Some older Arbol datasets require [garbage collection](glossary.md#garbage-collection) whenever new data is added, due to a legacy method of overwriting old hashes with new hashes. However, all of Arbol's newer datasets use an architecture where old hashes are preserved and new posts reference the previous post. This methodology creates a linked list of hashes, with each hash containing a reference to the previous hash. As the length of the list becomes computationally burdensome, the system consolidates intermediate nodes and adds a new route to the head, creating a [DAG (directed acyclic graph)](merkle-dag.md) structure. Heads are always stored in a master [heads.json reference file](https://gateway.arbolmarket.com/climate/hashes/heads.json) located on Arbol's command server.

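For readers of the case study, the hashing and pinning steps in the hunk above (steps 5 and 8) correspond to commands along these lines. This is a sketch only: the dataset path and CID placeholder are illustrative, and `--nocopy` requires Kubo's experimental filestore to be enabled.

```shell
ipfs config --json Experimental.FilestoreEnabled true   # enable the experimental filestore
ipfs add -r --nocopy ./dataset                           # recursive add without copying data into the datastore
ipfs pin add -r <dataset-CID>                            # recursively pin the published hash on a storage node
```
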
docs/concepts/case-study-audius.md

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ One other key ingredient in Audius' decision to initially adopt IPFS was its sep

  As a large user of the IPFS network, Audius has taken advantage of the [official IPFS forums](https://discuss.ipfs.tech), as well as support provided directly from the core IPFS development team. They are particularly impressed with the level of support and third-party tools that are available on IPFS.

- "We think about the IPFS and Filecoin community as a great role model for what we are doing with the community around Audius, in terms of activity and robustness," says Nagaraj. "There are a lot of developers who are constantly contributing to IPFS. A few post on websites like [simpleaswater.com](http://www.simpleaswater.com) with tons of examples of what you can do with IPFS, how to actually implement it, breaking down all the details. We would aim for something like that. It would be incredible for us if we could reach that level of community participation." Nagaraj also calls out as particularly helpful blog posts and other content created by third-party contributors to the codebase, as well as the ecosystem that is developing around IPFS collaborators such as [Textile](http://textile.io/) and [Pinata](https://pinata.cloud/). Having such an active community around an open-source project adds to the momentum and progress of IPFS as a whole.
+ "We think about the IPFS and Filecoin community as a great role model for what we are doing with the community around Audius, in terms of activity and robustness," says Nagaraj, who calls out blog posts and other content created by third-party contributors to the codebase as particularly helpful, as well as the ecosystem that is developing around IPFS collaborators such as [Textile](http://textile.io/) and [Pinata](https://pinata.cloud/). Having such an active community around an open-source project adds to the momentum and progress of IPFS as a whole.

  ## IPFS benefits

docs/concepts/case-study-likecoin.md

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ Content shared within Liker Land is stored and delivered using IPFS's distribute
  _&mdash; Chung Wu, chief researcher, LikeCoin_
  :::

- The end-user workflow is simple and intuitive: Content creators, curators, and consumers take part in the LikeCoin ecosystem by using the free [Liker Land](https://liker.land/getapp) app, a reader and wallet for engaging with content. LikeCoin also offers browser extensions for [Chromium](https://chrome.google.com/webstore/detail/liker-land/cjjcemdmkddjbofomfgjedpiifpgkjhe?hl=en) (Chrome and Brave) and [Firefox](https://addons.mozilla.org/en-US/firefox/addon/liker-land/?src=search), so users can add material to their Liker Land reading lists on the fly. Outside the Liker Land app, creators can collect likes directly from WordPress, Medium, and other common blogging platforms using an easy-to-implement LikeCoin button plugin.
+ The end-user workflow is simple and intuitive: Content creators, curators, and consumers take part in the LikeCoin ecosystem by using the free [Liker Land](https://liker.land/getapp) app, a reader and wallet for engaging with content. LikeCoin also offers a browser extension for [Chromium](https://chrome.google.com/webstore/detail/liker-land/cjjcemdmkddjbofomfgjedpiifpgkjhe?hl=en) (Chrome and Brave), so users can add material to their Liker Land reading lists on the fly. Outside the Liker Land app, creators can collect likes directly from WordPress, Medium, and other common blogging platforms using an easy-to-implement LikeCoin button plugin.

  As a "free republic" of content creators, curators and publishers, and consumers, Liker Land also operates as a decentralized autonomous organization (DAO) with a [Cosmos](https://cosmos.network/)-based bonded proof-of-stake mechanism in which every "citizen" of Liker Land participates in blockchain governance. Acquiring more LikeCoin increases a user's voting power in Liker Land. While taking part in Liker Land is free of charge, users can also become Civic Likers — the Liker Land equivalent of "taxpayers" — for a flat monthly rate, acting as ongoing supporters whose contributions fund creators.

docs/concepts/case-study-snapshot.md

Lines changed: 3 additions & 3 deletions
@@ -93,9 +93,9 @@ These voting systems are used to calculate the results of a vote based on the vo

  ## How Snapshot uses IPFS

- Snapshot uses IPFS to make the whole voting process fully transparent and auditable. Every space, proposal, vote, and user action is added to IPFS and has a [content identifier (CID)](/concepts/content-addressing/).
+ Snapshot uses IPFS to make the whole voting process fully transparent and auditable. Every space, proposal, vote, and user action is added to IPFS and has a [content identifier (CID)](/concepts/content-addressing.md).

- Additionally, the Snapshot UI is also [available on IPFS](https://bafybeihzjoqahhgrhnsksyfubnlmjvkt66aliodeicywwtofodeuo2icde.ipfs.dweb.link/) and linked using the ENS name `shot.eth` which is accessible via any ENS resolution service, e.g. [shot.eth.limo](https://shot.eth.limo/), and [shot.eth.link](https://shot.eth.link/) (see the `x-ipfs-path` and `X-Ipfs-Roots` headers when making an HTTP request.)
+ Additionally, the Snapshot UI is also [available on IPFS](https://bafybeihzjoqahhgrhnsksyfubnlmjvkt66aliodeicywwtofodeuo2icde.ipfs.dweb.link/) and linked using the ENS name `shot.eth` which is accessible via any ENS resolution service, e.g. [shot.eth.limo](https://shot.eth.limo/)(see the `x-ipfs-path` and `X-Ipfs-Roots` headers when making an HTTP request.)

  To understand how Snapshot uses IPFS, it's useful to understand how the whole architecture was designed. Snapshot is a hybrid app combining design patterns common to Web2 and Web3 apps, and is based on the three-tier architecture:

@@ -115,7 +115,7 @@ pineapple.js exposes a `pin` method that takes a JSON object and sends it to the

  ### Open access via IPFS Gateways

- After data is added to the IPFS network via pinning services, it is also made available for viewing by users via an [IPFS Gateway](/concepts/ipfs-gateway/). Links to the signed messages for [proposals](https://snapshot.mypinata.cloud/ipfs/bafkreigva2y23hnepirhvup2widmawmjiih3kvvuaph3a7mrivkiqcvuki) and [votes](https://snapshot.mypinata.cloud/ipfs/bafkreibozdzgw5y5piburro6pxspw7yjcdaymj3fyqjl2rohsthnqfwc6e) are integrated into the Snapshot UI.
+ After data is added to the IPFS network via pinning services, it is also made available for viewing by users via an [IPFS Gateway](/concepts/ipfs-gateway.md). Links to the signed messages for [proposals](https://snapshot.mypinata.cloud/ipfs/bafkreigva2y23hnepirhvup2widmawmjiih3kvvuaph3a7mrivkiqcvuki) and [votes](https://snapshot.mypinata.cloud/ipfs/bafkreibozdzgw5y5piburro6pxspw7yjcdaymj3fyqjl2rohsthnqfwc6e) are integrated into the Snapshot UI.

  ## IPFS benefits

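The gateway headers mentioned in the Snapshot hunk above can be checked with a plain HTTP request; a quick illustration (the URL is the one from the text, and the exact headers returned depend on the gateway serving it):

```shell
curl -sI https://shot.eth.limo/ | grep -i ipfs   # look for x-ipfs-path and X-Ipfs-Roots
```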