Markdown fixes to fix rendering on hub (dbt-labs#336)
coapacetic authored and clrcrl committed May 18, 2021
1 parent df1f9e1 commit c4216bc
Showing 2 changed files with 53 additions and 40 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -15,6 +15,7 @@
* Fix `cardinality_equality` test when the two tables' column names differed ([#334](https://github.com/fishtown-analytics/dbt-utils/pull/334)) [@joellabes](https://github.com/joellabes)

## Under the hood
* Fix Markdown formatting for hub rendering ([#336](https://github.com/fishtown-analytics/dbt-utils/issues/350), [@coapacetic](https://github.com/coapacetic))

# dbt-utils v0.6.4

92 changes: 52 additions & 40 deletions README.md
@@ -14,40 +14,39 @@ Most date macros are not supported on postgres.
#### current_timestamp ([source](macros/cross_db_utils/current_timestamp.sql))
This macro returns the current timestamp.

Usage:
**Usage:**
```
{{ dbt_utils.current_timestamp() }}
```

#### dateadd ([source](macros/cross_db_utils/dateadd.sql))
This macro adds a time/day interval to the supplied date/timestamp. Note: The `datepart` argument is database-specific.

Usage:
**Usage:**
```
{{ dbt_utils.dateadd(datepart='day', interval=1, from_date_or_timestamp="'2017-01-01'") }}
```
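
For instance, to compute a date seven days after an order date (the `orders` model and `order_date` column here are hypothetical):
```sql
select
    order_id,
    order_date,
    {{ dbt_utils.dateadd(datepart='day', interval=7, from_date_or_timestamp='order_date') }} as expected_delivery_date
from {{ ref('orders') }}
```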

#### datediff ([source](macros/cross_db_utils/datediff.sql))
This macro calculates the difference between two dates.

Usage:
**Usage:**
```
{{ dbt_utils.datediff("'2018-01-01'", "'2018-01-20'", 'day') }}
```
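
The arguments can also be column references; a minimal sketch using hypothetical `ordered_at` and `shipped_at` columns:
```sql
select
    order_id,
    {{ dbt_utils.datediff('ordered_at', 'shipped_at', 'day') }} as days_to_ship
from {{ ref('orders') }}
```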


#### split_part ([source](macros/cross_db_utils/split_part.sql))
This macro splits a string of text using the supplied delimiter and returns the supplied part number (1-indexed).

Usage:
**Usage:**
```
{{ dbt_utils.split_part(string_text='1,2,3', delimiter_text=',', part_number=1) }}
```

#### date_trunc ([source](macros/cross_db_utils/date_trunc.sql))
Truncates a date or timestamp to the specified datepart. Note: The `datepart` argument is database-specific.

Usage:
**Usage:**
```
{{ dbt_utils.date_trunc(datepart, date) }}
```
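
For example, to roll daily records up to the month they occurred in (model and column names are hypothetical):
```sql
select
    {{ dbt_utils.date_trunc('month', 'order_date') }} as order_month,
    count(*) as n_orders
from {{ ref('orders') }}
group by 1
```
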
@@ -58,7 +57,7 @@ Gets the last day for a given date and datepart. Notes:
- The `datepart` argument is database-specific.
- This macro currently only supports dateparts of `month` and `quarter`.

Usage:
**Usage:**
```
{{ dbt_utils.last_day(date, datepart) }}
```
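
For example, to find the month-end date for each record (hypothetical model and column names):
```sql
select
    order_date,
    {{ dbt_utils.last_day('order_date', 'month') }} as month_end_date
from {{ ref('orders') }}
```
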
@@ -87,17 +86,19 @@ Notes:
`num_buckets + 1` if the expression is greater than or equal to max_value.


Usage:
**Usage:**
```
{{ dbt_utils.width_bucket(expr, min_value, max_value, num_buckets) }}
```
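
As an illustration, assigning each order amount to one of ten equal-width buckets between 0 and 100 (the `orders` model and `amount` column are hypothetical):
```sql
select
    order_id,
    amount,
    {{ dbt_utils.width_bucket('amount', 0, 100, 10) }} as amount_bucket
from {{ ref('orders') }}
```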

---

### Date/Time
#### date_spine ([source](macros/datetime/date_spine.sql))
This macro returns the sql required to build a date spine. The spine will include the `start_date` (if it is aligned to the `datepart`), but it will not include the `end_date`.

Usage:
**Usage:**

```
{{ dbt_utils.date_spine(
datepart="day",
@@ -106,7 +107,9 @@ Usage:
)
}}
```

---

### Geo
#### haversine_distance ([source](macros/geo/haversine_distance.sql))
This macro calculates the [haversine distance](http://daynebatten.com/2015/09/latitude-longitude-distance-sql/) between a pair of x/y coordinates.
@@ -123,7 +126,7 @@ Optionally takes a `unit` string parameter ('km' or 'mi') which defaults to mile
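
A minimal sketch, assuming `lat1`/`lon1`/`lat2`/`lon2` keyword arguments and hypothetical pickup/dropoff columns:
```sql
select
    trip_id,
    {{ dbt_utils.haversine_distance(lat1='pickup_lat', lon1='pickup_lon', lat2='dropoff_lat', lon2='dropoff_lon') }} as trip_distance
from {{ ref('trips') }}
```
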
#### equal_rowcount ([source](macros/schema_tests/equal_rowcount.sql))
This schema test asserts that two relations have the same number of rows.

Usage:
**Usage:**
```yaml
version: 2

@@ -138,7 +141,7 @@ models:
#### equality ([source](macros/schema_tests/equality.sql))
This schema test asserts the equality of two relations. Optionally specify a subset of columns to compare.

Usage:
**Usage:**
```yaml
version: 2

@@ -156,7 +159,7 @@ models:
#### expression_is_true ([source](macros/schema_tests/expression_is_true.sql))
This schema test asserts that a valid sql expression is true for all records. This is useful when checking integrity across columns, for example, that a total is equal to the sum of its parts, or that at least one column is true.

Usage:
**Usage:**
```yaml
version: 2

@@ -171,7 +174,8 @@ models:
The macro accepts an optional parameter `condition` that allows for asserting
the `expression` on a subset of all records.

Usage:
**Usage:**

```yaml
version: 2

@@ -207,7 +211,7 @@ models:
#### recency ([source](macros/schema_tests/recency.sql))
This schema test asserts that there is data in the referenced model at least as recent as the defined interval prior to the current timestamp.

Usage:
**Usage:**
```yaml
version: 2

@@ -223,7 +227,7 @@ models:
#### at_least_one ([source](macros/schema_tests/at_least_one.sql))
This schema test asserts that a column has at least one value.
Usage:
**Usage:**
```yaml
version: 2

@@ -240,7 +244,7 @@ models:
#### not_constant ([source](macros/schema_tests/not_constant.sql))
This schema test asserts that a column does not have the same value in all rows.

Usage:
**Usage:**
```yaml
version: 2

@@ -256,7 +260,7 @@ models:
#### cardinality_equality ([source](macros/schema_tests/cardinality_equality.sql))
This schema test asserts that values in a given column have exactly the same cardinality as values from a different column in a different model.

Usage:
**Usage:**
```yaml
version: 2

@@ -274,7 +278,7 @@ models:
#### unique_where ([source](macros/schema_tests/unique_where.sql))
This test validates that there are no duplicate values present in a field for a subset of rows by specifying a `where` clause.

Usage:
**Usage:**
```yaml
version: 2

@@ -290,7 +294,7 @@ models:
#### not_null_where ([source](macros/schema_tests/not_null_where.sql))
This test validates that there are no null values present in a column for a subset of rows by specifying a `where` clause.

Usage:
**Usage:**
```yaml
version: 2
@@ -322,7 +326,7 @@ models:
#### relationships_where ([source](macros/schema_tests/relationships_where.sql))
This test validates the referential integrity between two relations (same as the core relationships schema test) with an added predicate to filter out some rows from the test. This is useful to exclude records such as test entities, rows created in the last X minutes/hours to account for temporary gaps due to ETL limitations, etc.

Usage:
**Usage:**
```yaml
version: 2
@@ -557,7 +561,7 @@ models:
#### get_query_results_as_dict ([source](macros/sql/get_query_results_as_dict.sql))
This macro returns a dictionary from a sql query, so that you don't need to interact with the Agate library to operate on the result.

Usage:
**Usage:**
```
-- Returns a dictionary of the users table where the state is California
{% set california_cities = dbt_utils.get_query_results_as_dict("select * from " ~ ref('cities') ~ " where state = 'CA' and city is not null ") %}
@@ -576,10 +580,11 @@ group by 1
This macro returns the unique values for a column in a given [relation](https://docs.getdbt.com/docs/writing-code-in-dbt/class-reference/#relation).
It takes an optional `default` argument for compiling when the relation does not already exist.
Usage:
```
-- Returns a list of the top 50 states in the `users` table
{% set states = dbt_utils.get_column_values(table=ref('users'), column='state', max_records=50, default=[]) %}
**Usage:**
```sql
-- Returns a list of the payment_methods in the stg_payments model
{% set payment_methods = dbt_utils.get_column_values(table=ref('stg_payments'), column='payment_method') %}
{% for state in states %}
...
@@ -642,10 +647,13 @@ Generate drop statements for all Relations that match a naming pattern:

#### get_relations_by_prefix ([source](macros/sql/get_relations_by_prefix.sql))
> This macro will soon be deprecated in favor of the more flexible `get_relations_by_pattern` macro (above)
Returns a list of [Relations](https://docs.getdbt.com/docs/writing-code-in-dbt/class-reference/#relation)
that match a given prefix, with an optional exclusion pattern. It's particularly
handy paired with `union_relations`.

**Usage:**

```
-- Returns a list of relations that match schema.prefix%
{% set relations = dbt_utils.get_relations_by_prefix('my_schema', 'my_prefix') %}
@@ -668,15 +676,19 @@ for relations.
#### group_by ([source](macros/sql/groupby.sql))
This macro builds a group by statement for fields 1...N

Usage:
**Usage:**
```
{{ dbt_utils.group_by(n=3) }} --> group by 1,2,3
```
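
For example, grouping an aggregate by its first two select expressions (the `orders` model is hypothetical):
```sql
select
    customer_id,
    status,
    count(*) as n_orders
from {{ ref('orders') }}
{{ dbt_utils.group_by(n=2) }}
```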

#### star ([source](macros/sql/star.sql))
This macro generates a list of all fields that exist in the `from` relation, excluding any fields listed in the `except` argument. The construction is identical to `select * from {{ref('my_model')}}`, replacing star (`*`) with the star macro. This macro also has an optional `relation_alias` argument that will prefix all generated fields with an alias.

Usage:
**Usage:**
```sql
select
{{ dbt_utils.star(ref('my_model')) }}
from {{ ref('my_model') }}
```
select
{{ dbt_utils.star(from=ref('my_model'), except=["exclude_field_1", "exclude_field_2"]) }}
@@ -712,36 +724,36 @@ the column that records the source of this row.
#### generate_series ([source](macros/sql/generate_series.sql))
This macro implements a cross-database mechanism to generate an arbitrarily long list of numbers. Specify the maximum number you'd like in your list and it will create a 1-indexed SQL result set.
Usage:
**Usage:**
```
{{ dbt_utils.generate_series(upper_bound=1000) }}
```
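
For instance, a model whose entire body is the macro call will materialize the numbers 1 through 12 as rows (the name of the single generated column is assumed here to be `generated_number`):
```sql
-- models/month_numbers.sql (hypothetical): one row per number, 1 through 12
{{ dbt_utils.generate_series(upper_bound=12) }}
```
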
#### surrogate_key ([source](macros/sql/surrogate_key.sql))
Implements a cross-database way to generate a hashed surrogate key using the fields specified.
Usage:
**Usage:**
```
{{ dbt_utils.surrogate_key(['field_a', 'field_b'[,...]]) }}
```
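
For example, hashing two hypothetical columns into a single key:
```sql
select
    {{ dbt_utils.surrogate_key(['customer_id', 'order_date']) }} as order_key,
    customer_id,
    order_date
from {{ ref('orders') }}
```
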
#### safe_add ([source](macros/sql/safe_add.sql))
Implements a cross-database way to sum nullable fields using the fields specified.
Usage:
**Usage:**
```
{{ dbt_utils.safe_add('field_a', 'field_b'[,...]) }}
```
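
For example, summing two nullable amount columns without the total becoming null (column and model names are hypothetical):
```sql
select
    order_id,
    {{ dbt_utils.safe_add('subtotal', 'shipping_cost') }} as total_amount
from {{ ref('orders') }}
```
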
#### pivot ([source](macros/sql/pivot.sql))
This macro pivots values from rows to columns.
Usage:
**Usage:**
```
{{ dbt_utils.pivot(<column>, <list of values>) }}
```
Example:
**Example:**
Input: orders
@@ -768,7 +780,7 @@
| S | 2 | 1 |
| M | 1 | 0 |
Arguments:
**Arguments:**
- column: Column name, required
- values: List of row values to turn into columns, required
@@ -785,7 +797,7 @@
This macro "un-pivots" a table from wide format to long format. Functionality is similar to pandas [melt](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html) function.
Boolean values are replaced with the strings 'true'|'false'
Usage:
**Usage:**
```
{{ dbt_utils.unpivot(
relation=ref('table_name'),
@@ -830,23 +842,23 @@ Usage:
#### get_url_parameter ([source](macros/web/get_url_parameter.sql))
This macro extracts a url parameter from a column containing a url.
Usage:
**Usage:**
```
{{ dbt_utils.get_url_parameter(field='page_url', url_parameter='utm_source') }}
```
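
For example, pulling the UTM source out of a hypothetical `page_views` model:
```sql
select
    page_url,
    {{ dbt_utils.get_url_parameter(field='page_url', url_parameter='utm_source') }} as utm_source
from {{ ref('page_views') }}
```
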
#### get_url_host ([source](macros/web/get_url_host.sql))
This macro extracts a hostname from a column containing a url.
Usage:
**Usage:**
```
{{ dbt_utils.get_url_host(field='page_url') }}
```
#### get_url_path ([source](macros/web/get_url_path.sql))
This macro extracts a page path from a column containing a url.
Usage:
**Usage:**
```
{{ dbt_utils.get_url_path(field='page_url') }}
```
@@ -893,7 +905,7 @@ Should a run of a model using this materialization be interrupted, a subsequent

Progress is logged in the command line for easy monitoring.

Usage:
**Usage:**
```sql
{{
config(
@@ -915,13 +927,13 @@ with events as (
....complex aggregates here....

```
Configuration values:
**Configuration values:**
* `period`: period to break the model into, must be a valid [datepart](https://docs.aws.amazon.com/redshift/latest/dg/r_Dateparts_for_datetime_functions.html) (default='Week')
* `timestamp_field`: the column name of the timestamp field that will be used to break the model into smaller queries
* `start_date`: literal date or timestamp - generally choose a date that is earlier than the start of your data
* `stop_date`: literal date or timestamp (default=current_timestamp)

Caveats:
**Caveats:**
* This materialization is compatible with dbt 0.10.1.
* This materialization has been written for Redshift.
* This materialization can only be used for a model where records are not expected to change after they are created.
