Pin markdownlint-cli version and apply fixes #2320

Merged
2 changes: 1 addition & 1 deletion Makefile
@@ -54,7 +54,7 @@ markdown-toc: $(MARKDOWN_TOC)
done

$(MARKDOWN_LINT):
-npm install markdownlint-cli
+npm install markdownlint-cli@0.31.0

.PHONY: markdownlint
markdownlint: $(MARKDOWN_LINT)
2 changes: 1 addition & 1 deletion experimental/trace/zpages.md
@@ -56,7 +56,7 @@ Statsz is focused on metrics, as it displays metrics and measures for exported v

For OpenTelemetry, a custom [span processor](../../specification/trace/sdk.md#span-processor) SHOULD be made to interface with the [Tracer API](../../specification/trace/api.md#tracer) to collect spans. This span processor collects references to running spans and exports completed spans to its own memory or to an aggregator that can aggregate information from multiple span processors.

-There SHOULD be a `data aggregator` that tracks running, error, and latency buckets counts for spans grouped by their respective name. The aggregator MUST also hold some sampled spans to provide users with more information. To prevent memory overload, only some spans MUST be sampled for each bucket for each span name; for example, if that sampled span max number is set to 5, then only up to 55 pieces of span data can be kept for each span name in the aggregator (sampled_max * number of buckets = 5 * [running + error + 9 latency buckets] = 5 * 11 = 55). Aggregation work is recommended to do periodically in batches.
+There SHOULD be a `data aggregator` that tracks running, error, and latency buckets counts for spans grouped by their respective name. The aggregator MUST also hold some sampled spans to provide users with more information. To prevent memory overload, only some spans MUST be sampled for each bucket for each span name; for example, if that sampled span max number is set to 5, then only up to 55 pieces of span data can be kept for each span name in the aggregator (sampled_max \* number of buckets = 5 \* [running + error + 9 latency buckets] = 5 \* 11 = 55). Aggregation work is recommended to do periodically in batches.
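The capacity arithmetic in the paragraph above can be spelled out in a short sketch. The constant and function names here are hypothetical illustrations, not anything the spec defines:

```python
# Worked example of the aggregator's per-span-name capacity bound.
# Bucket layout from the spec text: 1 running + 1 error + 9 latency buckets.
RUNNING_BUCKETS = 1
ERROR_BUCKETS = 1
LATENCY_BUCKETS = 9

def max_sampled_spans_per_name(sampled_max: int) -> int:
    """Upper bound on sampled span data kept per span name."""
    buckets = RUNNING_BUCKETS + ERROR_BUCKETS + LATENCY_BUCKETS
    return sampled_max * buckets

# With sampled_max = 5: 5 * (1 + 1 + 9) = 5 * 11 = 55, matching the example.
print(max_sampled_spans_per_name(5))  # 55
```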

When the user visits the Tracez endpoint, likely something similar to `host:port/tracez`, then the distribution of latencies for span names MUST be rendered. When clicking on buckets counts for a span name, additional details on individual sampled spans for that bucket MUST be shown. These details would include trace ID, parent ID, span ID, start time, attributes, and more depending on the type of bucket (running, error, or latency) and what's implemented/recorded in the other components. See [HTTP Server](#http-server) for more information on that and other implementation details.

2 changes: 1 addition & 1 deletion specification/glossary.md
@@ -195,7 +195,7 @@ Key/value pairs contained in a `Log Record`.

Logs that are recorded in a format which has a well-defined structure that allows
to differentiate between different elements of a Log Record (e.g. the Timestamp,
-the Attributes, etc). The _Syslog protocol_ ([RFC 5424](https://tools.ietf.org/html/rfc5424)),
+the Attributes, etc). The *Syslog protocol* ([RFC 5424](https://tools.ietf.org/html/rfc5424)),
for example, defines a `structured-data` format.

### Flat File Logs
14 changes: 7 additions & 7 deletions specification/metrics/api.md
@@ -260,7 +260,7 @@ asynchronous:
seconds. [Measurements](#measurement) recorded by asynchronous instruments
cannot be associated with the [Context](../context/context.md).

-Please note that the term _synchronous_ and _asynchronous_ have nothing to do
+Please note that the term *synchronous* and *asynchronous* have nothing to do
with the [asynchronous
pattern](https://en.wikipedia.org/wiki/Asynchronous_method_invocation).

@@ -575,13 +575,13 @@ httpServerDuration.Record(100, new HttpRequestAttributes { method = "GET", schem
### Asynchronous Gauge

Asynchronous Gauge is an [asynchronous Instrument](#asynchronous-instrument)
-which reports non-additive value(s) (_e.g. the room temperature - it makes no
-sense to report the temperature value from multiple rooms and sum them up_) when
+which reports non-additive value(s) (e.g. the room temperature - it makes no
+sense to report the temperature value from multiple rooms and sum them up) when
the instrument is being observed.

-Note: if the values are additive (_e.g. the process heap size - it makes sense
+Note: if the values are additive (e.g. the process heap size - it makes sense
 to report the heap size from multiple processes and sum them up, so we get the
-total heap usage_), use [Asynchronous Counter](#asynchronous-counter) or
+total heap usage), use [Asynchronous Counter](#asynchronous-counter) or
[Asynchronous UpDownCounter](#asynchronous-updowncounter).

Example uses for Asynchronous Gauge:
@@ -845,9 +845,9 @@ customersInStore.Add(-1, new Account { Type = "residential" });
### Asynchronous UpDownCounter

Asynchronous UpDownCounter is an [asynchronous
-Instrument](#asynchronous-instrument) which reports additive value(s) (_e.g. the
+Instrument](#asynchronous-instrument) which reports additive value(s) (e.g. the
 process heap size - it makes sense to report the heap size from multiple
-processes and sum them up, so we get the total heap usage_) when the instrument
+processes and sum them up, so we get the total heap usage) when the instrument
is being observed.

Note: if the value is
8 changes: 4 additions & 4 deletions specification/metrics/datamodel.md
@@ -270,9 +270,9 @@ characteristics for a Timeseries. A metric stream is identified by:
It is possible (and likely) that more than one metric stream is created per
`Instrument` in the event model.

-__Note: The same `Resource`, `name` and `Attribute`s but differing point kind
+**Note: The same `Resource`, `name` and `Attribute`s but differing point kind
 coming out of an OpenTelemetry SDK is considered an "error state" that SHOULD
-be handled by an SDK.__
+be handled by an SDK.**

A metric stream can use one of these basic point kinds, all of
which satisfy the requirements above, meaning they define a decomposable
@@ -599,7 +599,7 @@ reference implementations.
##### Scale Zero: Extract the Exponent

For scale zero, the index of a value equals its normalized base-2
-exponent, meaning the value of _exponent_ in the base-2 fractional
+exponent, meaning the value of *exponent* in the base-2 fractional
representation `1._significand_ * 2**_exponent_`. Normal IEEE 754
double-width floating point values have indices in the range
`[-1022, +1023]` and subnormal values have indices in the range
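For illustration only, the scale-zero index described above can be sketched with `math.frexp`. The function name is made up, and a production SDK would also handle the subnormal range and boundary rounding per the surrounding spec text:

```python
import math

def map_to_index_scale0(value: float) -> int:
    # math.frexp gives value = m * 2**e with 0.5 <= m < 1; the normalized
    # base-2 form 1.significand * 2**exponent therefore has exponent = e - 1.
    m, e = math.frexp(value)
    return e - 1

print(map_to_index_scale0(1.0))   # 0   (1.0  = 1.0 * 2**0)
print(map_to_index_scale0(6.0))   # 2   (6.0  = 1.5 * 2**2)
print(map_to_index_scale0(0.25))  # -2  (0.25 = 1.0 * 2**-2)
```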
@@ -754,7 +754,7 @@ originating source of truth. In practical terms, this implies the following:
identified in some way.
- Aggregations of metric streams MUST only be written from a single logical
source.
-__Note: This implies aggregated metric streams must reach one destination__.
+**Note: This implies aggregated metric streams must reach one destination**.

In systems, there is the possibility of multiple writers sending data for the
same metric stream (duplication). For example, if an SDK implementation fails
4 changes: 2 additions & 2 deletions specification/metrics/sdk.md
@@ -548,7 +548,7 @@ By default, explicit bucket histogram aggregation with more than 1 bucket will
use `AlignedHistogramBucketExemplarReservoir`. All other aggregations will use
`SimpleFixedSizeExemplarReservoir`.

-*SimpleExemplarReservoir*
+_SimpleExemplarReservoir_
This Exemplar reservoir MAY take a configuration parameter for the size of the
reservoir pool. The reservoir will accept measurements using an equivalent of
the [naive reservoir sampling
@@ -564,7 +564,7 @@ algorithm](https://en.wikipedia.org/wiki/Reservoir_sampling)
Additionally, the `num_measurements_seen` count SHOULD be reset at every
collection cycle.
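A minimal sketch of such a reservoir, assuming Algorithm R-style replacement; the class and method names are illustrative, not mandated by the spec:

```python
import random

class SimpleFixedSizeExemplarReservoir:
    """Naive reservoir sampling of measurements, per the text above."""

    def __init__(self, size: int):
        self.size = size
        self.measurements = []
        self.num_measurements_seen = 0

    def offer(self, measurement) -> None:
        self.num_measurements_seen += 1
        if len(self.measurements) < self.size:
            self.measurements.append(measurement)
        else:
            # Replace a stored entry with probability size / seen.
            i = random.randrange(self.num_measurements_seen)
            if i < self.size:
                self.measurements[i] = measurement

    def collect(self):
        # Drain at each collection cycle and reset the seen counter,
        # as the spec recommends.
        exemplars, self.measurements = self.measurements, []
        self.num_measurements_seen = 0
        return exemplars
```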

-*AlignedHistogramBucketExemplarReservoir*
+_AlignedHistogramBucketExemplarReservoir_
This Exemplar reservoir MUST take a configuration parameter that is the
configuration of a Histogram. This implementation MUST keep the last seen
measurement that falls within a histogram bucket. The reservoir will accept
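A sketch of the last-seen-per-bucket behavior described above. The names are illustrative, and the exact boundary-inclusivity rule is an assumption here (it follows from the histogram configuration in a real SDK):

```python
import bisect

class AlignedHistogramBucketExemplarReservoir:
    """Keep the last measurement seen in each histogram bucket."""

    def __init__(self, boundaries):
        self.boundaries = list(boundaries)
        # One slot per bucket: len(boundaries) + 1, the last being overflow.
        self.slots = [None] * (len(self.boundaries) + 1)

    def offer(self, value) -> None:
        index = bisect.bisect_left(self.boundaries, value)
        self.slots[index] = value  # last-seen wins within a bucket

    def collect(self):
        exemplars = [v for v in self.slots if v is not None]
        self.slots = [None] * len(self.slots)
        return exemplars
```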
@@ -96,7 +96,7 @@ path.
* `http.scheme`, `net.peer.ip`, `net.peer.port`, `http.target`

**[2]** For server metric attributes, `http.url` is usually not readily available on the server side but would have to be assembled in a cumbersome and sometimes lossy process from other information (see e.g. <https:/open-telemetry/opentelemetry-python/pull/148>).
-It is thus preferred to supply the raw data that *is* available.
+It is thus preferred to supply the raw data that _is_ available.
Namely, one of the following sets is RECOMMENDED (in order of usual preference unless for a particular web server/framework it is known that some other set is preferable for some reason; all strings must be non-empty):

* `http.scheme`, `http.host`, `http.target`
6 changes: 3 additions & 3 deletions specification/schemas/overview.md
@@ -239,9 +239,9 @@ field in the messages:
- The schema_url field in the InstrumentationLibraryLogs message applies to the
contained LogRecord messages.

-- If schema_url field is non-empty both in Resource* message and in the
-contained InstrumentationLibrary* message then the value in
-InstrumentationLibrary* message takes the precedence.
+- If schema_url field is non-empty both in Resource\*message and in the
> **Review comment (Contributor):** This erased the space between Resource* and message. I don't believe that was intended, right?

+contained InstrumentationLibrary\* message then the value in
+InstrumentationLibrary\* message takes the precedence.
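The precedence rule in this bullet reduces to a simple fallback. A hypothetical helper (not a spec-defined API) makes that concrete:

```python
def effective_schema_url(resource_schema_url: str, library_schema_url: str) -> str:
    # The InstrumentationLibrary-level value wins whenever it is non-empty;
    # otherwise fall back to the Resource-level value.
    return library_schema_url or resource_schema_url

# Resource-only: the Resource value applies to the contained records.
print(effective_schema_url("https://example.com/schemas/1.4.0", ""))
# Both set: the InstrumentationLibrary value takes precedence.
print(effective_schema_url("https://example.com/schemas/1.4.0",
                           "https://example.com/schemas/1.7.0"))
```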

## API Support

2 changes: 1 addition & 1 deletion specification/sdk-environment-variables.md
@@ -151,7 +151,7 @@ usage in Zipkin Exporter configuration:

This will be used to specify whether or not the exporter uses v1 or v2, json,
thrift or protobuf. As of 1.0 of the specification, there
-*is no specified default, or configuration via environment variables*.
+_is no specified default, or configuration via environment variables_.

## Prometheus Exporter

8 changes: 4 additions & 4 deletions specification/trace/api.md
@@ -286,11 +286,11 @@ the entire operation and, optionally, one or more sub-spans for its sub-operatio
- A list of timestamped [`Event`s](#add-events)
- A [`Status`](#set-status).

-The _span name_ concisely identifies the work represented by the Span,
+The *span name* concisely identifies the work represented by the Span,
for example, an RPC method name, a function name,
or the name of a subtask or stage within a larger computation.
The span name SHOULD be the most general string that identifies a
-(statistically) interesting _class of Spans_,
+(statistically) interesting *class of Spans*,
rather than individual Span instances while still being human-readable.
That is, "get_user" is a reasonable name, while "get_user/314159",
where "314159" is a user ID, is not a good name due to its high cardinality.
@@ -374,14 +374,14 @@ The API MUST accept the following parameters:

Each span has zero or one parent span and zero or more child spans, which
represent causally related operations. A tree of related spans comprises a
-trace. A span is said to be a _root span_ if it does not have a parent. Each
+trace. A span is said to be a *root span* if it does not have a parent. Each
trace includes a single root span, which is the shared ancestor of all other
spans in the trace. Implementations MUST provide an option to create a `Span` as
a root span, and MUST generate a new `TraceId` for each root span created.
For a Span with a parent, the `TraceId` MUST be the same as the parent.
Also, the child span MUST inherit all `TraceState` values of its parent by default.

-A `Span` is said to have a _remote parent_ if it is the child of a `Span`
+A `Span` is said to have a *remote parent* if it is the child of a `Span`
created in another process. Each propagators' deserialization must set
`IsRemote` to true on a parent `SpanContext` so `Span` creation knows if the
parent is remote.
2 changes: 1 addition & 1 deletion specification/trace/semantic_conventions/http.md
@@ -28,7 +28,7 @@ HTTP spans MUST follow the overall [guidelines for span names](../api.md#span).
Many REST APIs encode parameters into URI path, e.g. `/api/users/123` where `123`
is a user id, which creates high cardinality value space not suitable for span
names. In case of HTTP servers, these endpoints are often mapped by the server
-frameworks to more concise _HTTP routes_, e.g. `/api/users/{user_id}`, which are
+frameworks to more concise *HTTP routes*, e.g. `/api/users/{user_id}`, which are
recommended as the low cardinality span names. However, the same approach usually
does not work for HTTP client spans, especially when instrumentation is provided
by a lower-level middleware that is not aware of the specifics of how the URIs
4 changes: 2 additions & 2 deletions specification/trace/semantic_conventions/messaging.md
@@ -178,7 +178,7 @@ For message consumers, the following additional attributes may be set:
| `process` | process |
<!-- endsemconv -->

-The _receive_ span is be used to track the time used for receiving the message(s), whereas the _process_ span(s) track the time for processing the message(s).
+The *receive* span is be used to track the time used for receiving the message(s), whereas the *process* span(s) track the time for processing the message(s).
Note that one or multiple Spans with `messaging.operation` = `process` may often be the children of a Span with `messaging.operation` = `receive`.
The distinction between receiving and processing of messages is not always of particular interest or sometimes hidden away in a framework (see the [Message consumption](#message-consumption) section above) and therefore the attribute can be left out.
For batch receiving and processing (see the [Batch receiving](#batch-receiving) and [Batch processing](#batch-processing) examples below) in particular, the attribute SHOULD be set.
@@ -189,7 +189,7 @@ Instead span kind should be set to either `CONSUMER` or `SERVER` according to th

#### RabbitMQ

-In RabbitMQ, the destination is defined by an _exchange_ and a _routing key_.
+In RabbitMQ, the destination is defined by an *exchange* and a *routing key*.
`messaging.destination` MUST be set to the name of the exchange. This will be an empty string if the default exchange is used.

<!-- semconv messaging.rabbitmq -->
Expand Down