
Commit

Merge pull request #1185 from ipfs/feat/renaming-go-ipfs
refactor: rename `go-ipfs` to `kubo`
See ipfs/kubo#8959
lidel authored Jul 21, 2022
2 parents a5e30e5 + 593dbc5 commit ca6603d
Showing 68 changed files with 6,186 additions and 5,536 deletions.
7 changes: 0 additions & 7 deletions .github/actions/latest-ipfs-tag/action.yml

This file was deleted.

File renamed without changes.
7 changes: 7 additions & 0 deletions .github/actions/latest-kubo-tag/action.yml
@@ -0,0 +1,7 @@
name: 'Find latest Kubo tag'
outputs:
  latest_tag:
    description: "latest Kubo tag name"
runs:
  using: 'docker'
  image: 'Dockerfile'
.github/actions/latest-kubo-tag/entrypoint.sh
@@ -2,16 +2,16 @@
set -eu

# extract tag name from latest stable release
REPO="ipfs/go-ipfs"
LATEST_IPFS_TAG=$(curl -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/${REPO}/releases/latest" | jq --raw-output ".tag_name")
REPO="ipfs/kubo"
LATEST_IPFS_TAG=$(curl -L -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/${REPO}/releases/latest" | jq --raw-output ".tag_name")

# extract IPFS release
cd /tmp
git clone "https://github.com/$REPO.git"
cd go-ipfs
cd kubo

# confirm tag is valid
git describe --tags "${LATEST_IPFS_TAG}"

echo "The latest IPFS tag is ${LATEST_IPFS_TAG}"
echo "The latest Kubo tag is ${LATEST_IPFS_TAG}"
echo "::set-output name=latest_tag::${LATEST_IPFS_TAG}"
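
For reference, the release lookup performed by this script can be exercised by hand. A minimal sketch, assuming only that `curl` and `jq` are installed locally:

```bash
# Illustrative only: the same "latest stable release" lookup the action
# performs, run manually against the GitHub API.
REPO="ipfs/kubo"
curl -sL -H "Accept: application/vnd.github.v3+json" \
  "https://api.github.com/repos/${REPO}/releases/latest" | jq -r .tag_name
# Prints a tag name such as "v0.14.0".
```
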
2 changes: 1 addition & 1 deletion .github/actions/update-with-latest-versions/action.yml
@@ -1,7 +1,7 @@
name: 'Update when a new tag or a new release is available'
inputs:
latest_ipfs_tag:
description: "latest go ipfs tag"
description: "latest Kubo tag"
required: true
outputs:
updated_branch:
24 changes: 12 additions & 12 deletions .github/actions/update-with-latest-versions/entrypoint.sh
@@ -4,40 +4,40 @@ set -eu
BRANCH=bump-documentation-to-latest-versions
LATEST_IPFS_TAG=$INPUT_LATEST_IPFS_TAG

echo "The latest IPFS tag is ${LATEST_IPFS_TAG}"
echo "The latest Kubo tag is ${LATEST_IPFS_TAG}"

ROOT=`pwd`
git checkout -b ${BRANCH}
API_FILE=`pwd`/docs/reference/http/api.md
API_FILE="$(pwd)/docs/reference/kubo/rpc.md"


# Update http api docs and cli docs

cd tools/http-api-docs

# extract go-ipfs release tag used in http-api-docs from go.mod in this repo
CURRENT_IPFS_TAG=`grep 'github.com/ipfs/go-ipfs ' ./go.mod | awk '{print $2}'`
echo "The currently used go-ipfs tag in http-api-docs is ${CURRENT_IPFS_TAG}"
# extract kubo release tag used in http-api-docs from go.mod in this repo
CURRENT_IPFS_TAG=$(grep 'github.com/ipfs/kubo ' ./go.mod | awk '{print $2}')
echo "The currently used Kubo tag in http-api-docs is ${CURRENT_IPFS_TAG}"

# make the upgrade, if newer go-ipfs tags exist
# make the upgrade, if newer Kubo tags exist
if [ "$CURRENT_IPFS_TAG" = "$LATEST_IPFS_TAG" ]; then
echo "http-api-docs already uses the latest go-ipfs tag."
echo "http-api-docs already uses the latest Kubo tag."
else
# update http-api-docs
sed "s/^\s*github.com\/ipfs\/go-ipfs\s\+$CURRENT_IPFS_TAG\s*$/ github.com\/ipfs\/go-ipfs $LATEST_IPFS_TAG/" go.mod > go.mod2
sed "s/^\s*github.com\/ipfs\/kubo\s\+$CURRENT_IPFS_TAG\s*$/ github.com\/ipfs\/kubo $LATEST_IPFS_TAG/" go.mod > go.mod2
mv go.mod2 go.mod
go mod tidy
make
http-api-docs > "$API_FILE"

# update cli docs
cd "$ROOT" # go back to root of ipfs-docs repo
git clone https://github.com/ipfs/go-ipfs.git
cd go-ipfs
git clone https://github.com/ipfs/kubo.git
cd kubo
git fetch --all --tags
git checkout "tags/$LATEST_IPFS_TAG"
go install ./cmd/ipfs
cd "$ROOT/docs/reference"
cd "$ROOT/docs/reference/kubo"
./generate-cli-docs.sh
fi

@@ -64,7 +64,7 @@ update_version() {
cd "${ROOT}"
update_version ipfs/ipfs-update current-ipfs-updater-version
update_version ipfs-cluster/ipfs-cluster current-ipfs-cluster-version
update_version ipfs/go-ipfs current-ipfs-version
update_version ipfs/kubo current-ipfs-version


# Push on change
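
The CLI-docs step above installs the tagged `ipfs` binary and then runs `./generate-cli-docs.sh`, which is not shown in this diff. As a rough, hypothetical sketch of that kind of generator (not the repository's actual script), one could enumerate subcommands with `ipfs commands` and capture each `--help` screen:

```bash
#!/usr/bin/env bash
# Hypothetical sketch only: not the repository's generate-cli-docs.sh.
# Assumes the `ipfs` binary produced by `go install ./cmd/ipfs` is on PATH.
set -eu

OUT=cli.md
: > "$OUT"

# `ipfs commands` lists every subcommand, one per line (e.g. "ipfs pin add").
ipfs commands | while read -r cmd; do
  printf '\n## %s\n\n' "$cmd" >> "$OUT"
  # Append the command's help screen verbatim.
  $cmd --help >> "$OUT"
done
```
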
8 changes: 4 additions & 4 deletions .github/workflows/update-on-new-ipfs-tag.yml
@@ -11,9 +11,9 @@ jobs:
steps:
- name: Checkout ipfs-docs
uses: actions/checkout@v2
- name: Find latest go-ipfs tag
- name: Find latest kubo tag
id: latest_ipfs
uses: ./.github/actions/latest-ipfs-tag
uses: ./.github/actions/latest-kubo-tag
- name: Update docs
id: update
uses: ./.github/actions/update-with-latest-versions
@@ -26,7 +26,7 @@ jobs:
github_token: ${{ secrets.GITHUB_TOKEN }}
source_branch: ${{ steps.update.outputs.updated_branch }}
destination_branch: "main"
pr_title: "Update documentation ${{ steps.latest_ipfs.outputs.latest_tag }}"
pr_body: "Release Notes: https://github.com/ipfs/go-ipfs/releases/${{ steps.latest_ipfs.outputs.latest_tag }}"
pr_title: "Update release version numbers"
pr_body: "This PR was opened from update-on-new-ipfs-tag.yml workflow."
pr_label: "needs/triage,P0"

4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -64,9 +64,9 @@ Write everything in using the [GitHub Flavored Markdown](https://github.github.c

### Project specific titles

When referring to projects by name, use proper noun capitalization: Go-IPFS and JS-IPFS.
When referring to projects by name, use proper noun capitalization: Kubo (GO-IPFS) and JS-IPFS.

Cases inside code blocks refer to commands and are not capitalized: `go-ipfs` or `js-ipfs`.
Cases inside code blocks refer to commands and are not capitalized: `kubo` (`go-ipfs`) or `js-ipfs`.

### Style and tone

7 changes: 4 additions & 3 deletions docs/.vuepress/config.js
@@ -263,10 +263,11 @@ module.exports = {
title: 'API & CLI',
path: '/reference/',
children: [
'/reference/go/api',
'/reference/http/gateway',
'/reference/js/api',
'/reference/http/api',
'/reference/cli'
'/reference/go/api',
'/reference/kubo/cli',
'/reference/kubo/rpc'
]
},
{
5 changes: 4 additions & 1 deletion docs/.vuepress/redirects
@@ -51,7 +51,10 @@
/recent-releases/go-ipfs-0-7/install/ /install/recent-releases
/recent-releases/go-ipfs-0-7/update-procedure/ /install/recent-releases
/reference/api/ /reference
/reference/api/cli/ /reference/cli
/reference/api/cli/ /reference/kubo/cli
/reference/cli/ /reference/kubo/cli
/reference/kubo/ /reference
/reference/http/ /reference/http/api
/reference/api/http/ /reference/http/api
/reference/go/overview/ /reference/go/api
/reference/js/overview/ /reference/js/api
11 changes: 11 additions & 0 deletions docs/.vuepress/theme/components/Page.vue
@@ -54,10 +54,21 @@ export default {
return root.scrollHeight < 15000
? root.classList.add('smooth-scroll')
: root.classList.remove('smooth-scroll')
},
advancedRedirect: async function () {
// Advanced redirect that is aware of URL #hash
const url = window.location.href
// https://github.com/ipfs/ipfs-docs/pull/1185
if (url.includes('/reference/http/api')) {
if (window.location.hash.startsWith('#api-v0')) {
window.location.replace(url.replace('/reference/http/api','/reference/kubo/rpc'))
}
}
}
},
mounted: function () {
this.smoothScroll()
this.advancedRedirect()
},
updated: function () {
this.smoothScroll()
2 changes: 1 addition & 1 deletion docs/basics/command-line.md
@@ -42,7 +42,7 @@ This will output something like:

```plaintext
Initializing daemon...
go-ipfs version: 0.12.0
Kubo version: 0.12.0
Repo version: 12
System version: arm64/darwin
[...]
4 changes: 2 additions & 2 deletions docs/community/contribute/grammar-formatting-and-style.md
@@ -36,9 +36,9 @@ If you have to use an acronym, spell the full phrase first and include the acron
### Project specific titles

When referring to projects by name, use proper noun capitalization: Go-IPFS and JS-IPFS.
When referring to projects by name, use proper noun capitalization: Kubo and JS-IPFS.

Cases inside code blocks refer to commands and are not capitalized: `go-ipfs` or `js-ipfs`.
Cases inside code blocks refer to commands and are not capitalized: `kubo` or `js-ipfs`.

### _Using_ IPFS, not _on_ IPFS

2 changes: 1 addition & 1 deletion docs/community/contribute/ways-to-contribute.md
@@ -14,7 +14,7 @@ IPFS and its sister-projects are big, with lots of code written in multiple lang

The biggest and most active repositories we have today are:

- [ipfs/go-ipfs](https://github.com/ipfs/go-ipfs)
- [ipfs/kubo](https://github.com/ipfs/kubo)
- [ipfs/js-ipfs](https://github.com/ipfs/js-ipfs)
- [libp2p/go-libp2p](https://github.com/libp2p/go-libp2p)
- [libp2p/js-libp2p](https://github.com/libp2p/js-libp2p)
6 changes: 3 additions & 3 deletions docs/concepts/case-study-arbol.md
@@ -80,21 +80,21 @@ Arbol's end users enjoy the "it just works" benefits of parametric protection, b

4. **Compression:** This step is the final one before data is imported to IPFS. Arbol compresses each file to save on disk space and reduce sync time.

5. **Hashing:** Arbol uses the stock IPFS recursive add operation ([`ipfs add -r`](./reference/cli/#ipfs-add)) for hashing, as well as the experimental `no-copy` feature. This feature cuts down on disk space used by the hashing node, especially on the initial build of the dataset. Without it, an entire dataset would be copied into the local IPFS datastore directory. This can create problems, since the default flat file system datastore (`flatfs`) can start to run out of index nodes (the software representation of disk locations) after a few million files, leading to hashing failure. Arbol is also experimenting with [Badger](https://github.com/ipfs/go-ipfs/releases/tag/v0.5.0), an alternative to flat file storage, in collaboration with the IPFS core team as the core team considers incorporating this change into IPFS itself.
5. **Hashing:** Arbol uses the stock IPFS recursive add operation ([`ipfs add -r`](./reference/kubo/cli/#ipfs-add)) for hashing, as well as the experimental `no-copy` feature. This feature cuts down on disk space used by the hashing node, especially on the initial build of the dataset. Without it, an entire dataset would be copied into the local IPFS datastore directory. This can create problems, since the default flat file system datastore (`flatfs`) can start to run out of index nodes (the software representation of disk locations) after a few million files, leading to hashing failure. Arbol is also experimenting with [Badger](https://github.com/ipfs/kubo/releases/tag/v0.5.0), an alternative to flat file storage, in collaboration with the IPFS core team as the core team considers incorporating this change into IPFS itself.

6. **Verification:** To ensure no errors were introduced to files during the parsing stage, queries are made to the source data files and compared against the results of an identical query made to the parsed, hashed data.

7. **Publishing:** Once a hash has been verified, it is posted to Arbol's master heads reference file, and is at this point accessible via Arbol's gateway and available for use in contracts.

8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](./reference/cli.md#ipfs-pin) command on it. Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers.
8. **Pinning and syncing:** When storage nodes in the Arbol network detect that a new hash has been added to the heads file, they run the standard, recursive [`ipfs pin -r`](./reference/kubo/cli.md#ipfs-pin) command on it. Arbol's primary active nodes don't need to be large in number: The network includes a single [gateway node](ipfs-gateway.md) that bootstraps with all the parsing/hashing nodes, and a few large storage nodes that serve as the primary data storage backup. However, data is also regularly synced with "cold nodes" — archival storage nodes that are mostly kept offline — as well as on individual IPFS nodes on Arbol's developers' and agronomists' personal computers.

9. **Garbage collection:** Some older Arbol datasets require [garbage collection](glossary.md#garbage-collection) whenever new data is added, due to a legacy method of overwriting old hashes with new hashes. However, all of Arbol's newer datasets use an architecture where old hashes are preserved and new posts reference the previous post. This methodology creates a linked list of hashes, with each hash containing a reference to the previous hash. As the length of the list becomes computationally burdensome, the system consolidates intermediate nodes and adds a new route to the head, creating a [DAG (directed acyclic graph)](merkle-dag.md) structure. Heads are always stored in a master [heads.json reference file](https://gateway.arbolmarket.com/climate/hashes/heads.json) located on Arbol's command server.

### The tooling

![Arbol high-level architecture](./images/case-studies/img-arbol-arch.svg)

In addition to out-of-the-box [`go-ipfs`](https://github.com/ipfs/go-ipfs), Arbol relies heavily on custom written libraries and a number of weather-specialized Python libraries such as [netCDF4](https://pypi.org/project/netCDF4/) (an interface to netCDF, a self-describing format for array-oriented data) and [rasterio](https://pypi.org/project/rasterio) (for geospatial raster data). Additionally, Docker and Digital Ocean are important tools in Arbol's box for continuous integration and deployment.
In addition to out-of-the-box [`kubo`](https://github.com/ipfs/kubo), Arbol relies heavily on custom written libraries and a number of weather-specialized Python libraries such as [netCDF4](https://pypi.org/project/netCDF4/) (an interface to netCDF, a self-describing format for array-oriented data) and [rasterio](https://pypi.org/project/rasterio) (for geospatial raster data). Additionally, Docker and Digital Ocean are important tools in Arbol's box for continuous integration and deployment.

As described above, Arbol datasets are ingested and augmented via either push or pull. For pulling data, Arbol uses a command server to query dataset release pages for new content. When new data is found, the command server spins up a Digital Ocean droplet (a Linux-based virtual machine) and deploys a "parse-interpret-compress-hash-verify" Docker container to it. This is done using a custom-built library that Arbol describes as "homebrew Lambda." Because Amazon's Lambda serverless compute has disk storage, CPU, and RAM limitations that make it unsuitable for the scale and complexity of Arbol's pipeline, the team has created their own tool.

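
The add and pin operations referenced in steps 5 and 8 above can be sketched with plain Kubo commands. This is an illustration rather than Arbol's actual tooling; the dataset path is a placeholder, and `--nocopy` requires the experimental filestore:

```bash
# Illustrative sketch, not Arbol's pipeline code.
# One-time setup: enable the experimental filestore so --nocopy adds
# reference files in place instead of copying them into the datastore.
ipfs config --json Experimental.FilestoreEnabled true

# Step 5: recursively add a prepared dataset; --quieter prints only the
# final root CID. "/data/arbol-dataset" is a placeholder path.
CID=$(ipfs add -r --nocopy --quieter /data/arbol-dataset)

# Step 8: a storage node pins that root; `ipfs pin add` is recursive by default.
ipfs pin add --progress "$CID"
```
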
6 changes: 3 additions & 3 deletions docs/concepts/case-study-audius.md
@@ -94,15 +94,15 @@ IPFS has provided Audius the full benefits of decentralized storage with no hass

## How Audius uses IPFS

All files and metadata on Audius are _shared_ using IPFS by creator node services, _registered_ on Audius smart contracts, _indexed_ by discovery services, and _served_ through the client to end users. Audius runs nodes internally to test new changes, and there are a dozen public hosts running nodes for specific services and geographies. However, content creators and listeners don’t need to know anything about the back end; they use the Audius client and client libraries to upload and stream audio. Each IPFS node within the Audius network is currently a [`go-ipfs`](https://github.com/ipfs/go-ipfs) container co-located with service logic. Audius implements the services interface with `go-ipfs` using [`py-ipfs-api`](https://github.com/ipfs-shipyard/py-ipfs-http-client) or [`ipfs-http-client`](https://github.com/ipfs/js-ipfs/tree/master/packages/ipfs-http-client) (JavaScript) to perform read and write operations.
All files and metadata on Audius are _shared_ using IPFS by creator node services, _registered_ on Audius smart contracts, _indexed_ by discovery services, and _served_ through the client to end users. Audius runs nodes internally to test new changes, and there are a dozen public hosts running nodes for specific services and geographies. However, content creators and listeners don’t need to know anything about the back end; they use the Audius client and client libraries to upload and stream audio. Each IPFS node within the Audius network is currently a [`kubo`](https://github.com/ipfs/kubo) container co-located with service logic. Audius implements the services interface with `kubo` using [`py-ipfs-api`](https://github.com/ipfs-shipyard/py-ipfs-http-client) or [`ipfs-http-client`](https://github.com/ipfs/js-ipfs/tree/master/packages/ipfs-http-client) (JavaScript) to perform read and write operations.

### The tooling

Audius uses the following IPFS implementations with no modification:

- **IPFS core**
- [`go-ipfs`](https://github.com/ipfs/go-ipfs)
- _All individual nodes are `go-ipfs` containers_
- [`kubo`](https://github.com/ipfs/kubo)
- _All individual nodes are `kubo` containers_
- [`py-ipfs-api`](https://github.com/ipfs-shipyard/py-ipfs-http-client)
- _Discovery provider is a Python application_
- _Python application uses a Flask server + Celery worker queue + PostgreSQL database_
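
The client libraries mentioned above ultimately talk to Kubo's HTTP RPC API. As a rough illustration of the read and write calls they wrap, assuming a node whose RPC API listens on the default `127.0.0.1:5001` and using placeholder file names:

```bash
# Illustrative only: raw RPC equivalents of the client-library calls.

# Write: add a file; the JSON response includes the resulting CID.
curl -sX POST -F file=@track.mp3 \
  "http://127.0.0.1:5001/api/v0/add" | jq -r .Hash

# Read: stream content back by CID (replace <cid> with a real CID).
curl -sX POST "http://127.0.0.1:5001/api/v0/cat?arg=<cid>" -o track-copy.mp3
```
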