ENG-10966: upgrade docusaurus to latest

pintusoliya committed Aug 20, 2024
1 parent 975a16f commit f16f9fb
Showing 274 changed files with 3,613 additions and 3,500 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/asf-site.ci.yml
@@ -26,7 +26,7 @@ jobs:
git pull --rebase hudi asf-site
- uses: actions/setup-node@v2
with:
-node-version: '16'
+node-version: '18'
- name: Build website
run: |
pushd ${{ env.DOCS_ROOT }}
2 changes: 1 addition & 1 deletion website/blog/2020-11-11-hudi-indexing-mechanisms.md
@@ -123,7 +123,7 @@ Some interesting work underway in this area:
- Record level index implementation, as a secondary index using another Hudi table.

Going forward, this will remain an area of active investment for the project. We are always looking for contributors who can drive these roadmap items forward.
-Please [engage](/contribute/get-involved) with our community if you want to get involved.
+Please [engage](/community/get-involved) with our community if you want to get involved.



2 changes: 1 addition & 1 deletion website/blog/2021-02-13-hudi-key-generators.md
@@ -37,7 +37,7 @@ key generators.
| ```hoodie.datasource.write.partitionpath.field``` | Refers to partition path field. This is a mandatory field. |
| ```hoodie.datasource.write.keygenerator.class``` | Refers to key generator class (including full path). Could refer to any of the available ones or a user-defined one. This is a mandatory field. |
| ```hoodie.datasource.write.partitionpath.urlencode```| When set to true, partition path will be url encoded. Default value is false. |
-| ```hoodie.datasource.write.hive_style_partitioning```| When set to true, uses hive style partitioning. Partition field name will be prefixed to the value. Format: “<partition_path_field_name>=<partition_path_value>”. Default value is false.|
+| ```hoodie.datasource.write.hive_style_partitioning```| When set to true, uses hive style partitioning. Partition field name will be prefixed to the value. Format: “\<partition_path_field_name\>=\<partition_path_value\>”. Default value is false.|

NOTE:
Please use `hoodie.datasource.write.keygenerator.class` instead of `hoodie.datasource.write.keygenerator.type`. The second config was introduced more recently.
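
To make the options above concrete, here is a minimal sketch of a Spark datasource write that sets them, assuming the hudi-spark bundle is on the classpath. The table name, schema, and base path are illustrative assumptions, not taken from this post:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("hudi-keygen-sketch")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// Illustrative rows; the uuid/city/fare columns are assumed for this sketch.
val df = Seq(("uuid-1", "san_francisco", 27.70), ("uuid-2", "chennai", 18.30))
  .toDF("uuid", "city", "fare")

df.write.format("hudi")
  .option("hoodie.table.name", "trips")
  .option("hoodie.datasource.write.recordkey.field", "uuid")
  .option("hoodie.datasource.write.partitionpath.field", "city")
  .option("hoodie.datasource.write.keygenerator.class",
    "org.apache.hudi.keygen.SimpleKeyGenerator")
  // Hive-style partitioning writes folders as city=<value> instead of <value>.
  .option("hoodie.datasource.write.hive_style_partitioning", "true")
  .mode(SaveMode.Overwrite)
  .save("/tmp/hudi_trips")
```

With `hive_style_partitioning` enabled, the rows above land under `city=san_francisco/` and `city=chennai/` rather than bare value folders.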
@@ -107,7 +107,7 @@ To use incremental models, you need to perform these two activities:

dbt provides a macro, `is_incremental()`, which is very useful for defining filters exclusively for incremental materializations.

-Often, you'll want to filter for "new" rows, as in, rows that have been created since the last time dbt ran this model. The best way to find the timestamp of the most recent run of this model is by checking the most recent timestamp in your target table. dbt makes it easy to query your target table by using the "[{{ this }}](https://docs.getdbt.com/reference/dbt-jinja-functions/this)" variable.
+Often, you'll want to filter for "new" rows, as in, rows that have been created since the last time dbt ran this model. The best way to find the timestamp of the most recent run of this model is by checking the most recent timestamp in your target table. dbt makes it easy to query your target table by using the "[\{{ this }}](https://docs.getdbt.com/reference/dbt-jinja-functions/this)" variable.

```sql title="models/my_model.sql"
{{
2 changes: 1 addition & 1 deletion website/contribute/developer-setup.md
@@ -335,7 +335,7 @@ Use `alt use` to use v1 version of docker-compose while running integration test
## Communication

All communication is expected to align with the [Code of Conduct](https://www.apache.org/foundation/policies/conduct).
-Discussion about contributing code to Hudi happens on the [dev@ mailing list](/contribute/get-involved). Introduce yourself!
+Discussion about contributing code to Hudi happens on the [dev@ mailing list](/community/get-involved). Introduce yourself!

## Code & Project Structure

2 changes: 1 addition & 1 deletion website/contribute/rfc-process.md
@@ -40,7 +40,7 @@ Use this discussion thread to get an agreement from people on the mailing list t
1. Create a folder `rfc-<number>` under `rfc` folder, where `<number>` is replaced by the actual RFC number used.
2. Copy the rfc template file `rfc/template.md` to `rfc/rfc-<number>/rfc-<number>.md` and proceed to draft your design document.
3. [Optional] Place any images used in the same directory and reference them with the `![alt text](./image.png)` markdown syntax.
-4. Add at least 2 PMC members as approvers (you can find their github usernames [here](/contribute/team)). You are free to add any number of dev members to your reviewers list.
+4. Add at least 2 PMC members as approvers (you can find their github usernames [here](/community/team)). You are free to add any number of dev members to your reviewers list.
5. Raise a PR against the master branch with `[RFC-<number>]` in the title and work through feedback until the RFC is approved (by approving the GitHub PR itself).
6. Before landing the PR, please change the status to "IN PROGRESS" under `rfc/README.md` and keep it updated as you go about implementing, completing, or even abandoning the RFC.

6 changes: 3 additions & 3 deletions website/docs/cli.md
@@ -452,7 +452,7 @@ To manually schedule or run a compaction, use the below command. This command us
operations.
**NOTE:** Make sure no other application is scheduling compaction for this table concurrently
-{: .notice--info}
+\{: .notice--info}
```java
hudi:trips->help compaction schedule
@@ -538,7 +538,7 @@ hudi:stock_ticks_mor->compaction validate --instant 20181005222601
```
**NOTE:** The following commands must be executed without any other writer/ingestion application running.
-{: .notice--warning}
+\{: .notice--warning}
Sometimes, it becomes necessary to remove a fileId from a compaction plan in order to speed up or unblock a compaction
operation. Any new log files written to this file group after the compaction was scheduled will be safely renamed
@@ -753,4 +753,4 @@ table change-table-type COW
╟────────────────────────────────────────────────┼──────────────────────────────────────┼──────────────────────────────────────╢
║ hoodie.timeline.layout.version │ 1 │ 1 ║
╚════════════════════════════════════════════════╧══════════════════════════════════════╧══════════════════════════════════════╝
-```
+```
6 changes: 3 additions & 3 deletions website/docs/clustering.md
@@ -159,8 +159,7 @@ The available strategies are as follows:
### Update Strategy

Currently, clustering can only be scheduled for tables/partitions not receiving any concurrent updates. By default,
-the config for update strategy - [`hoodie.clustering.updates.strategy`](/docs/configurations/#hoodieclusteringupdatesstrategy) is set to ***
-SparkRejectUpdateStrategy***. If some file group has updates during clustering then it will reject updates and throw an
+the config for update strategy - [`hoodie.clustering.updates.strategy`](/docs/configurations/#hoodieclusteringupdatesstrategy) is set to ***SparkRejectUpdateStrategy***. If some file group has updates during clustering then it will reject updates and throw an
exception. However, in some use-cases updates are very sparse and do not touch most file groups. The default strategy to
simply reject updates does not seem fair. In such use-cases, users can set the config to ***SparkAllowUpdateStrategy***.
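
As a rough sketch, that looks like the following inline-clustering write; apart from `hoodie.clustering.updates.strategy` and the strategy names discussed above, every name here (including the strategy class's package path) is an assumption to verify against your Hudi version:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("hudi-clustering-sketch")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

val df = Seq(("uuid-1", "chennai", 11.40)).toDF("uuid", "city", "fare") // assumed schema

df.write.format("hudi")
  .option("hoodie.table.name", "trips")
  .option("hoodie.datasource.write.recordkey.field", "uuid")
  .option("hoodie.datasource.write.partitionpath.field", "city")
  .option("hoodie.clustering.inline", "true")
  .option("hoodie.clustering.inline.max.commits", "4")
  // Tolerate sparse updates to file groups that are being clustered,
  // instead of failing the write with an exception.
  .option("hoodie.clustering.updates.strategy",
    "org.apache.hudi.client.clustering.update.strategy.SparkAllowUpdateStrategy")
  .mode(SaveMode.Append)
  .save("/tmp/hudi_trips")
```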

@@ -270,6 +269,7 @@ whose location can be passed as `--props` when starting the Hudi Streamer (just

A sample spark-submit command to set up HoodieStreamer is shown below:


```bash
spark-submit \
--class org.apache.hudi.utilities.streamer.HoodieStreamer \
@@ -341,4 +341,4 @@ out-of-the-box. Note that as of now only linear sort is supported in Java execut
## Related Resources
<h3>Videos</h3>

-* [Understanding Clustering in Apache Hudi and the Benefits of Asynchronous Clustering](https://www.youtube.com/watch?v=R_sm4wlGXuE)
+* [Understanding Clustering in Apache Hudi and the Benefits of Asynchronous Clustering](https://www.youtube.com/watch?v=R_sm4wlGXuE)
11 changes: 3 additions & 8 deletions website/docs/compaction.md
@@ -53,13 +53,8 @@ Hudi provides various options for both these strategies as discussed below.

| Config Name | Default | Description |
|----------------------------------------------------|-------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| hoodie.compact.inline.trigger.strategy | NUM_COMMITS (Optional) | org.apache.hudi.table.action.compact.CompactionTriggerStrategy: Controls when compaction is scheduled.<br />`Config Param: INLINE_COMPACT_TRIGGER_STRATEGY` |
-Possible values: <br/><ul><li>`NUM_COMMITS`: triggers compaction when there are at least N delta commits after last
-completed compaction.</li><li>`NUM_COMMITS_AFTER_LAST_REQUEST`: triggers compaction when there are at least N delta commits
-after last completed or requested compaction.</li><li>`TIME_ELAPSED`: triggers compaction after N seconds since last
-compaction.</li><li>`NUM_AND_TIME`: triggers compaction when both there are at least N delta commits and N seconds
-elapsed (both must be satisfied) after last completed compaction.</li><li>`NUM_OR_TIME`: triggers compaction when both
-there are at least N delta commits or N seconds elapsed (either condition is satisfied) after last completed compaction.</li></ul>
+| hoodie.compact.inline.trigger.strategy | NUM_COMMITS (Optional) | org.apache.hudi.table.action.compact.CompactionTriggerStrategy: Controls when compaction is scheduled.<br />`Config Param: INLINE_COMPACT_TRIGGER_STRATEGY` <br/>
+<ul><li>`NUM_COMMITS`: triggers compaction when there are at least N delta commits after the last completed compaction.</li><li>`NUM_COMMITS_AFTER_LAST_REQUEST`: triggers compaction when there are at least N delta commits after the last completed or requested compaction.</li><li>`TIME_ELAPSED`: triggers compaction after N seconds since the last compaction.</li><li>`NUM_AND_TIME`: triggers compaction when there are at least N delta commits and N seconds have elapsed (both must be satisfied) after the last completed compaction.</li><li>`NUM_OR_TIME`: triggers compaction when there are at least N delta commits or N seconds have elapsed (either condition is satisfied) after the last completed compaction.</li></ul>|

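For illustration, a `NUM_COMMITS` trigger might be wired into a merge-on-read writer as in the sketch below; the table, schema, and the threshold of 5 delta commits are assumptions, not values from this page:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("hudi-compaction-trigger-sketch")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

val df = Seq(("uuid-1", "chennai", 42L)).toDF("uuid", "city", "ts") // assumed schema

df.write.format("hudi")
  .option("hoodie.table.name", "trips")
  .option("hoodie.datasource.write.recordkey.field", "uuid")
  .option("hoodie.datasource.write.partitionpath.field", "city")
  .option("hoodie.datasource.write.table.type", "MERGE_ON_READ")
  .option("hoodie.compact.inline", "true")
  // Schedule compaction once at least 5 delta commits have accumulated
  // since the last completed compaction.
  .option("hoodie.compact.inline.trigger.strategy", "NUM_COMMITS")
  .option("hoodie.compact.inline.max.delta.commits", "5")
  .mode(SaveMode.Append)
  .save("/tmp/hudi_trips")
```
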
#### Compaction Strategies
| Config Name | Default | Description |
@@ -81,7 +76,7 @@ order of creation of Hive Partitions. It helps to compact data in latest partiti
Total_IO allowed.</li><li>`UnBoundedCompactionStrategy`: UnBoundedCompactionStrategy will not change ordering or filter
any compaction. It is a pass-through and will compact all the base files which have a log file. This usually means
no intelligence on compaction.</li><li>`UnBoundedPartitionAwareCompactionStrategy`: UnBoundedPartitionAwareCompactionStrategy is a custom UnBounded Strategy. This will filter all the partitions that
-are eligible to be compacted by a {@link BoundedPartitionAwareCompactionStrategy} and return the result. This is done
+are eligible to be compacted by a \{@link BoundedPartitionAwareCompactionStrategy} and return the result. This is done
so that a long running UnBoundedPartitionAwareCompactionStrategy does not step over partitions in a shorter running
BoundedPartitionAwareCompactionStrategy. Essentially, this is an inverse of the partitions chosen in
BoundedPartitionAwareCompactionStrategy</li></ul>
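
A strategy from this table can be selected the same way via `hoodie.compaction.strategy`, as in this sketch; the package path of the strategy class is an assumption to check against your Hudi version:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("hudi-compaction-strategy-sketch")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

val df = Seq(("uuid-1", "2024/08/20", 42L)).toDF("uuid", "ds", "ts") // assumed schema

df.write.format("hudi")
  .option("hoodie.table.name", "trips")
  .option("hoodie.datasource.write.recordkey.field", "uuid")
  .option("hoodie.datasource.write.partitionpath.field", "ds")
  .option("hoodie.datasource.write.table.type", "MERGE_ON_READ")
  .option("hoodie.compact.inline", "true")
  // Compact only the most recent day-based partitions, bounding compaction IO.
  .option("hoodie.compaction.strategy",
    "org.apache.hudi.table.action.compact.strategy.BoundedPartitionAwareCompactionStrategy")
  .mode(SaveMode.Append)
  .save("/tmp/hudi_trips")
```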