
fix links #1306

Merged
25 commits merged on Nov 30, 2022

Changes from 22 commits

Commits (25)
ad7c52b
fix links
ElPaisano Oct 12, 2022
519c3ae
Fix link to command line install section
ElPaisano Oct 12, 2022
9b73681
fix links to install subsections and concepts glossary
ElPaisano Oct 12, 2022
3618641
Fix want-list link and format headers
ElPaisano Oct 14, 2022
d7aac93
Fix links to ipfs add and ipfs pin
ElPaisano Oct 14, 2022
1ea2ca5
use md extension in path for links
ElPaisano Oct 14, 2022
1b02807
fix relay, merkle-dag, unixfs links
ElPaisano Oct 14, 2022
398b050
Fix various broken links on libp2p page
ElPaisano Oct 17, 2022
0faae21
Fix links on nodes page
ElPaisano Oct 17, 2022
a447aec
Fix link to manually connecting node a to node b
ElPaisano Oct 17, 2022
c6b434c
fix link to catch all and pwa spa support
ElPaisano Oct 17, 2022
2ffff17
accidentally added a python swp file, this should not be here
ElPaisano Oct 17, 2022
2a2f0d2
Merge branch 'ipfs:main' into patch2
ElPaisano Oct 18, 2022
79facad
Address point 2 in https://github.com/ipfs/ipfs-docs/pull/1306#issuec…
ElPaisano Oct 18, 2022
100016d
Address point 3 in https://github.com/ipfs/ipfs-docs/pull/1306#issuec…
ElPaisano Oct 18, 2022
74694b4
Remove shot eth link
ElPaisano Oct 18, 2022
3144938
Update browserify link
ElPaisano Oct 18, 2022
dc6ce33
Update link to chat app live demo
ElPaisano Oct 18, 2022
bbad053
Link to Kubo 050 changelog instead of missing blog
ElPaisano Oct 18, 2022
7ef044c
Merge branch 'ipfs:main' into patch2
ElPaisano Nov 14, 2022
b8b7c7e
Fix links to kubo cli install and glossary that Tmo found plus the an…
ElPaisano Nov 14, 2022
197dd98
Re-change links to LibP2P docs back to what they were originally per …
ElPaisano Nov 15, 2022
484e42f
Merge branch 'ipfs:main' into patch2
ElPaisano Nov 29, 2022
7f54aa2
Update links now that Doks theme is merged
ElPaisano Nov 29, 2022
88eea40
Fix links per https://github.com/ipfs/ipfs-docs/pull/1306#issuecommen…
ElPaisano Nov 29, 2022
11 changes: 11 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,11 @@
repos:
  - repo: https://github.com/igorshubovych/markdownlint-cli
    rev: v0.32.2
    hooks:
      - id: markdownlint

repos:
  - repo: https://github.com/tcort/markdown-link-check
    rev: v3.10.3
    hooks:
      - id: markdown-link-check
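The two hooks added here cover Markdown style (markdownlint) and dead-link detection (markdown-link-check). As a rough sketch of how they would typically be exercised locally, assuming the `pre-commit` runner is installed from PyPI and the commands are run from the repository root:

```shell
# Install the pre-commit runner and register it as a git hook (assumed workflow)
pip install pre-commit
pre-commit install

# Run the markdownlint and markdown-link-check hooks against every file in the repo
pre-commit run --all-files
```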
4 changes: 2 additions & 2 deletions README.md
@@ -26,7 +26,7 @@
- [Static-site generator](#static-site-generator)
- [Automated deployments](#automated-deployments)
- [Translation](#translation)
- [Core members](#core-members)
- [Core members](#primary-maintainers)
- [License](#license)
<!-- /TOC -->

@@ -149,7 +149,7 @@ Please stay tuned on the steps to translate the documentation.
- [@johnnymatthews](https://github.com/johnnymatthews): Project leadership & organization
- [@cwaring](https://github.com/cwaring): Development support
- [@2color](https://github.com/2color): Developer relations & technical writing(ecosystem)
- [@DannyS03](https://github.com/DannyS03}: Technical writing(engineering)
- [@DannyS03](https://github.com/DannyS03): Technical writing(engineering)
- [@jennijuju](https://github.com/jennijuju): Management and supervision

## License
2 changes: 1 addition & 1 deletion docs/basics/README.md
@@ -17,7 +17,7 @@ Have an idea of what IPFS is but haven't really used it before? You should start

![An IPFS daemon running in a terminal window.](./images/ipfs-command-line.png)

If you're a bit more serious about IPFS and want to start poking around the command-line interface (CLI), then this section is for you. No buttons or images here; [just good-old-fashioned CLI interfaces and pipeable commands →](./command-line.md)
If you're a bit more serious about IPFS and want to start poking around the command-line interface (CLI), then this section is for you. No buttons or images here; [just good-old-fashioned CLI interfaces and pipeable commands →](../install/command-line.md)

## Other implementations

4 changes: 2 additions & 2 deletions docs/basics/desktop-app.md
@@ -8,12 +8,12 @@ description: "A simple walkthrough of the basic functions of the IPFS desktop ap
This guide will walk you through the basics of IPFS Desktop and teach you how to add, remove, and download a file using IPFS. This guide will only cover the basics and will avoid talking about more complex concepts.

:::tip Use the glossary
Some of these terms might be unfamiliar to you, and that's ok! Just check out the [glossary](../concepts/glossary/)! There you'll find definitions of all the common terms used when talking about IPFS.
Some of these terms might be unfamiliar to you, and that's ok! Just check out the [glossary](../concepts/glossary.md)! There you'll find definitions of all the common terms used when talking about IPFS.
:::

## Install IPFS Desktop

Installation instructions for [macOS](../install/ipfs-desktop/#macos), [Ubuntu](../install/ipfs-desktop/#ubuntu), and [Windows](../install/ipfs-desktop/#windows).
Installation instructions for [macOS](../install/ipfs-desktop.md#macos), [Ubuntu](../install/ipfs-desktop.md#ubuntu), and [Windows](../install/ipfs-desktop.md#windows).

The installation guides linked above are straightforward and easy to follow; simply follow the instructions that correspond to your operating system, and you will have IPFS Desktop going in just a few minutes.

6 changes: 3 additions & 3 deletions docs/basics/go/command-line.md
@@ -7,13 +7,13 @@ description: "A simple walkthrough of how to perform basic IPFS operations using

This short guide aims to walk you through the **basics of using IPFS with the Kubo CLI**. Kubo is [one of multiple IPFS implementations](../ipfs-implementations.md). It is the oldest IPFS implementation and exposes a CLI (among other things).

You will learn how to add, retrieve, read, and remove files within the CLI. If you are unsure about the meaning of some terms, you can check out the [glossary](../concepts/glossary.md).
You will learn how to add, retrieve, read, and remove files within the CLI. If you are unsure about the meaning of some terms, you can check out the [glossary](../../concepts/glossary.md).

All instructions and examples shown here were performed and tested on an M1 Mac. However, the IPFS commands are the same on Linux, macOS, and Windows. You will need to know how to navigate your computer's directories from within the CLI. If you're unsure how to use the CLI, we recommend learning how before continuing with this guide.

## Install Kubo

Next up, we need to install Kubo for the command-line. We have a great guide that will walk you through how to [install Kubo with the CLI](../install/command-line.md).
Next up, we need to install Kubo for the command-line. We have a great guide that will walk you through how to [install Kubo with the CLI](../../install/command-line.md).

Once you have Kubo installed, we need to get our node up and running. If this is your first time using Kubo, you will first need to initialize the configuration files:
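A rough sketch of that first-time setup, assuming the Kubo `ipfs` binary is already on your `PATH` (exact output varies by version):

```shell
# One-time step: create the local IPFS repository and default configuration
ipfs init

# Start the Kubo daemon so the node can connect to the network
ipfs daemon
```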

@@ -191,7 +191,7 @@ We can _pin_ data we want to save to our IPFS node to ensure we don't lose this
pinned bafybeif2ewg3nqa33mjokpxii36jj2ywfqjpy3urdh7v6vqyfjoocvgy3a recursively
```

By default, objects that you retrieve over IPFS are not pinned to your node. If you wish to prevent the files from being garbage collected, you need to pin them. You will notice that the pin you just added is a `recursive` pin, meaning it is a directory containing other objects. Check out the [Pinning page to learn more about how this works](../concepts/persistence.md).
By default, objects that you retrieve over IPFS are not pinned to your node. If you wish to prevent the files from being garbage collected, you need to pin them. You will notice that the pin you just added is a `recursive` pin, meaning it is a directory containing other objects. Check out the [Pinning page to learn more about how this works](../../concepts/persistence.md).
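As a brief illustration of that workflow (a sketch only; the CID is taken from the example output above, and `ipfs pin ls` simply reports what is currently pinned):

```shell
# Pin a directory and everything under it so garbage collection will not remove it
ipfs pin add -r bafybeif2ewg3nqa33mjokpxii36jj2ywfqjpy3urdh7v6vqyfjoocvgy3a

# Confirm the recursive pin is present
ipfs pin ls --type=recursive
```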

## Remove a file

12 changes: 8 additions & 4 deletions docs/concepts/bitswap.md
@@ -10,7 +10,7 @@ related:

# Bitswap

Bitswap is a core module of IPFS for exchanging blocks of data. It directs the requesting and sending of blocks to and from other peers in the network. Bitswap is a _message-based protocol_ where all messages contain [want-lists](#want-list) or blocks. Bitswap has a [Go implementation](https://github.com/ipfs/go-bitswap) and a [JavaScript implementation](https://github.com/ipfs/js-ipfs-bitswap).
Bitswap is a core module of IPFS for exchanging blocks of data. It directs the requesting and sending of blocks to and from other peers in the network. Bitswap is a _message-based protocol_ where all messages contain [want-lists](#want-lists) or blocks. Bitswap has a [Go implementation](https://github.com/ipfs/go-bitswap) and a [JavaScript implementation](https://github.com/ipfs/js-ipfs-bitswap).

Bitswap has two main jobs:

@@ -19,7 +19,7 @@ Bitswap has two main jobs:

## How Bitswap works

IPFS breaks up files into chunks of data called _blocks_. These blocks are identified by a [content identifier (CID)](/concepts/content-addressing). When nodes running the Bitswap protocol want to fetch a file, they send out `want-lists` to other peers. A `want-list` is a list of CIDs for blocks a peer wants to receive. Each node remembers which blocks its peers want. Each time a node receives a block, it checks if any of its peers want the block and sends it to them if they do.
IPFS breaks up files into chunks of data called _blocks_. These blocks are identified by a [content identifier (CID)](/concepts/content-addressing.md).

### Want Lists

When nodes running the Bitswap protocol want to fetch a file, they send out `want-lists` to other peers. A `want-list` is a list of CIDs for blocks a peer wants to receive. Each node remembers which blocks its peers want. Each time a node receives a block, it checks if any of its peers want the block and sends it to them if they do.

Here is a simplified version of a `want-list`:

@@ -31,13 +31,13 @@ Want-list {
}
```
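On a live node, the want-list described above can be inspected directly; a small sketch, assuming the Kubo implementation with its daemon running:

```shell
# Show the CIDs this node is currently asking its peers for
ipfs bitswap wantlist

# Show aggregate Bitswap statistics (blocks sent and received, partners, etc.)
ipfs bitswap stat
```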

#### Discovery
### Discovery

To find peers that have a file, a node running the Bitswap protocol first sends a request called a _want-have_ to all the peers it is connected to. This _want-have_ request contains the CID of the root block of the file (the root block is at the top of the DAG of blocks that make up the file). Peers that have the root block send a _have_ response and are added to a session. Peers that don't have the block send a _dont-have_ response. Bitswap builds up a map of which nodes have and don't have each block.

![Diagram of the _want-have/want-block_ process.](./images/bitswap/diagram-of-the-want-have-want-block-process.png =740x537)

#### Transfer
### Transfer

Bitswap sends _want-block_ to peers that have the block, and they respond with the block itself. If none of the peers have the root block, Bitswap queries the Distributed Hash Table (DHT) to ask who can provide the root block.

4 changes: 2 additions & 2 deletions docs/concepts/case-study-arbol.md
@@ -80,13 +80,13 @@ Arbol's end users enjoy the "it just works" benefits of parametric protection, b

4. **Compression:** This step is the final one before data is imported to IPFS. Arbol compresses each file to save on disk space and reduce sync time.

5. **Hashing:** Arbol uses the stock IPFS recursive add operation ([`ipfs add -r`](./reference/kubo/cli/#ipfs-add)) for hashing, as well as the experimental `no-copy` feature. This feature cuts down on disk space used by the hashing node, especially on the initial build of the dataset. Without it, an entire dataset would be copied into the local IPFS datastore directory. This can create problems, since the default flat file system datastore (`flatfs`) can start to run out of index nodes (the software representation of disk locations) after a few million files, leading to hashing failure. Arbol is also experimenting with [Badger](https://github.com/ipfs/kubo/releases/tag/v0.5.0), an alternative to flat file storage, in collaboration with the IPFS core team as the core team considers incorporating this change into IPFS itself.
5. **Hashing:** Arbol uses the stock IPFS recursive add operation ([`ipfs add -r`](../reference/kubo/cli.md#ipfs-add)) for hashing, as well as the experimental `no-copy` feature. This feature cuts down on disk space used by the hashing node, especially on the initial build of the dataset. Without it, an entire dataset would be copied into the local IPFS datastore directory. This can create problems, since the default flat file system datastore (`flatfs`) can start to run out of index nodes (the software representation of disk locations) after a few million files, leading to hashing failure. Arbol is also experimenting with [Badger](https://github.com/ipfs/kubo/releases/tag/v0.5.0), an alternative to flat file storage, in collaboration with the IPFS core team as the core team considers incorporating this change into IPFS itself.

6. **Verification:** To ensure no errors were introduced to files during the parsing stage, queries are made to the source data files and compared against the results of an identical query made to the parsed, hashed data.

7. **Publishing:** Once a hash has been verified, it is posted to Arbol's master heads reference file, and is at this point accessible via Arbol's gateway and available for use in contracts.

8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](./reference/kubo/cli.md#ipfs-pin) command on it. Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers.
8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](../reference/kubo/cli.md#ipfs-pin) command on it. Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers.

9. **Garbage collection:** Some older Arbol datasets require [garbage collection](glossary.md#garbage-collection) whenever new data is added, due to a legacy method of overwriting old hashes with new hashes. However, all of Arbol's newer datasets use an architecture where old hashes are preserved and new posts reference the previous post. This methodology creates a linked list of hashes, with each hash containing a reference to the previous hash. As the length of the list becomes computationally burdensome, the system consolidates intermediate nodes and adds a new route to the head, creating a [DAG (directed acyclic graph)](merkle-dag.md) structure. Heads are always stored in a master [heads.json reference file](https://gateway.arbolmarket.com/climate/hashes/heads.json) located on Arbol's command server.

2 changes: 1 addition & 1 deletion docs/concepts/case-study-audius.md
@@ -58,7 +58,7 @@ One other key ingredient in Audius' decision to initially adopt IPFS was its sep

As a large user of the IPFS network, Audius has taken advantage of the [official IPFS forums](https://discuss.ipfs.tech), as well as support provided directly from the core IPFS development team. They are particularly impressed with the level of support and third-party tools that are available on IPFS.

"We think about the IPFS and Filecoin community as a great role model for what we are doing with the community around Audius, in terms of activity and robustness," says Nagaraj. "There are a lot of developers who are constantly contributing to IPFS. A few post on websites like [simpleaswater.com](http://www.simpleaswater.com) with tons of examples of what you can do with IPFS, how to actually implement it, breaking down all the details. We would aim for something like that. It would be incredible for us if we could reach that level of community participation." Nagaraj also calls out as particularly helpful blog posts and other content created by third-party contributors to the codebase, as well as the ecosystem that is developing around IPFS collaborators such as [Textile](http://textile.io/) and [Pinata](https://pinata.cloud/). Having such an active community around an open-source project adds to the momentum and progress of IPFS as a whole.
"We think about the IPFS and Filecoin community as a great role model for what we are doing with the community around Audius, in terms of activity and robustness," says Nagaraj, who calls out blog posts and other content created by third-party contributors to the codebase as particularly helpful, as well as the ecosystem that is developing around IPFS collaborators such as [Textile](http://textile.io/) and [Pinata](https://pinata.cloud/). Having such an active community around an open-source project adds to the momentum and progress of IPFS as a whole.

## IPFS benefits

2 changes: 1 addition & 1 deletion docs/concepts/case-study-likecoin.md
@@ -57,7 +57,7 @@ Content shared within Liker Land is stored and delivered using IPFS's distribute
_&mdash; Chung Wu, chief researcher, LikeCoin_
:::

The end-user workflow is simple and intuitive: Content creators, curators, and consumers take part in the LikeCoin ecosystem by using the free [Liker Land](https://liker.land/getapp) app, a reader and wallet for engaging with content. LikeCoin also offers browser extensions for [Chromium](https://chrome.google.com/webstore/detail/liker-land/cjjcemdmkddjbofomfgjedpiifpgkjhe?hl=en) (Chrome and Brave) and [Firefox](https://addons.mozilla.org/en-US/firefox/addon/liker-land/?src=search), so users can add material to their Liker Land reading lists on the fly. Outside the Liker Land app, creators can collect likes directly from WordPress, Medium, and other common blogging platforms using an easy-to-implement LikeCoin button plugin.
The end-user workflow is simple and intuitive: Content creators, curators, and consumers take part in the LikeCoin ecosystem by using the free [Liker Land](https://liker.land/getapp) app, a reader and wallet for engaging with content. LikeCoin also offers a browser extension for [Chromium](https://chrome.google.com/webstore/detail/liker-land/cjjcemdmkddjbofomfgjedpiifpgkjhe?hl=en) (Chrome and Brave), so users can add material to their Liker Land reading lists on the fly. Outside the Liker Land app, creators can collect likes directly from WordPress, Medium, and other common blogging platforms using an easy-to-implement LikeCoin button plugin.

As a "free republic" of content creators, curators and publishers, and consumers, Liker Land also operates as a decentralized autonomous organization (DAO) with a [Cosmos](https://cosmos.network/)-based bonded proof-of-stake mechanism in which every "citizen" of Liker Land participates in blockchain governance. Acquiring more LikeCoin increases a user's voting power in Liker Land. While taking part in Liker Land is free of charge, users can also become Civic Likers — the Liker Land equivalent of "taxpayers" — for a flat monthly rate, acting as ongoing supporters whose contributions fund creators.

6 changes: 3 additions & 3 deletions docs/concepts/case-study-snapshot.md
@@ -93,9 +93,9 @@ These voting systems are used to calculate the results of a vote based on the vo

## How Snapshot uses IPFS

Snapshot uses IPFS to make the whole voting process fully transparent and auditable. Every space, proposal, vote, and user action is added to IPFS and has a [content identifier (CID)](/concepts/content-addressing/).
Snapshot uses IPFS to make the whole voting process fully transparent and auditable. Every space, proposal, vote, and user action is added to IPFS and has a [content identifier (CID)](/concepts/content-addressing.md).

Additionally, the Snapshot UI is also [available on IPFS](https://bafybeihzjoqahhgrhnsksyfubnlmjvkt66aliodeicywwtofodeuo2icde.ipfs.dweb.link/) and linked using the ENS name `shot.eth` which is accessible via any ENS resolution service, e.g. [shot.eth.limo](https://shot.eth.limo/), and [shot.eth.link](https://shot.eth.link/) (see the `x-ipfs-path` and `X-Ipfs-Roots` headers when making an HTTP request.)
Additionally, the Snapshot UI is also [available on IPFS](https://bafybeihzjoqahhgrhnsksyfubnlmjvkt66aliodeicywwtofodeuo2icde.ipfs.dweb.link/) and linked using the ENS name `shot.eth` which is accessible via any ENS resolution service, e.g. [shot.eth.limo](https://shot.eth.limo/) (see the `x-ipfs-path` and `X-Ipfs-Roots` headers when making an HTTP request.)

To understand how Snapshot uses IPFS, it's useful to understand how the whole architecture was designed. Snapshot is a hybrid app combining design patterns common to Web2 and Web3 apps, and is based on the three-tier architecture:

@@ -115,7 +115,7 @@ pineapple.js exposes a `pin` method that takes a JSON object and sends it to the

### Open access via IPFS Gateways

After data is added to the IPFS network via pinning services, it is also made available for viewing by users via an [IPFS Gateway](/concepts/ipfs-gateway/). Links to the signed messages for [proposals](https://snapshot.mypinata.cloud/ipfs/bafkreigva2y23hnepirhvup2widmawmjiih3kvvuaph3a7mrivkiqcvuki) and [votes](https://snapshot.mypinata.cloud/ipfs/bafkreibozdzgw5y5piburro6pxspw7yjcdaymj3fyqjl2rohsthnqfwc6e) are integrated into the Snapshot UI.
After data is added to the IPFS network via pinning services, it is also made available for viewing by users via an [IPFS Gateway](/concepts/ipfs-gateway.md). Links to the signed messages for [proposals](https://snapshot.mypinata.cloud/ipfs/bafkreigva2y23hnepirhvup2widmawmjiih3kvvuaph3a7mrivkiqcvuki) and [votes](https://snapshot.mypinata.cloud/ipfs/bafkreibozdzgw5y5piburro6pxspw7yjcdaymj3fyqjl2rohsthnqfwc6e) are integrated into the Snapshot UI.

## IPFS benefits
