From Jason Turner

[atomics.order]

@@ -1,11 +1,11 @@
  ### Order and consistency <a id="atomics.order">[[atomics.order]]</a>
 
  ``` cpp
  namespace std {
  enum class memory_order : unspecified {
- relaxed, consume, acquire, release, acq_rel, seq_cst
  };
  }
  ```
 
  The enumeration `memory_order` specifies the detailed regular
@@ -15,21 +15,15 @@ enumerated values and their meanings are as follows:
 
  - `memory_order::relaxed`: no operation orders memory.
  - `memory_order::release`, `memory_order::acq_rel`, and
  `memory_order::seq_cst`: a store operation performs a release
  operation on the affected memory location.
- - `memory_order::consume`: a load operation performs a consume operation
- on the affected memory location. [*Note 1*: Prefer
- `memory_order::acquire`, which provides stronger guarantees than
- `memory_order::consume`. Implementations have found it infeasible to
- provide performance better than that of `memory_order::acquire`.
- Specification revisions are under consideration. — *end note*]
  - `memory_order::acquire`, `memory_order::acq_rel`, and
  `memory_order::seq_cst`: a load operation performs an acquire
  operation on the affected memory location.
 
- [*Note 2*: Atomic operations specifying `memory_order::relaxed` are
  relaxed with respect to memory ordering. Implementations must still
  guarantee that any given atomic access to a particular atomic object be
  indivisible with respect to all other atomic accesses to that
  object. — *end note*]
 
@@ -64,35 +58,35 @@ S:
  - if a `memory_order::seq_cst` fence X happens before A and B is a
  `memory_order::seq_cst` operation, then X precedes B in S; and
  - if a `memory_order::seq_cst` fence X happens before A and B happens
  before a `memory_order::seq_cst` fence Y, then X precedes Y in S.
 
- [*Note 3*: This definition ensures that S is consistent with the
  modification order of any atomic object M. It also ensures that a
  `memory_order::seq_cst` load A of M gets its value either from the last
  modification of M that precedes A in S or from some
  non-`memory_order::seq_cst` modification of M that does not happen
  before any modification of M that precedes A in S. — *end note*]
 
- [*Note 4*: We do not require that S be consistent with “happens before”
  [[intro.races]]. This allows more efficient implementation of
  `memory_order::acquire` and `memory_order::release` on some machine
  architectures. It can produce surprising results when these are mixed
  with `memory_order::seq_cst` accesses. — *end note*]
 
- [*Note 5*: `memory_order::seq_cst` ensures sequential consistency only
  for a program that is free of data races and uses exclusively
  `memory_order::seq_cst` atomic operations. Any use of weaker ordering
  will invalidate this guarantee unless extreme care is used. In many
  cases, `memory_order::seq_cst` atomic operations are reorderable with
  respect to other atomic operations performed by the same
  thread. — *end note*]
 
  Implementations should ensure that no “out-of-thin-air” values are
  computed that circularly depend on their own computation.
 
- [*Note 6*:
 
  For example, with `x` and `y` initially zero,
 
  ``` cpp
  // Thread 1:
@@ -111,11 +105,11 @@ store of 42 to `y` is only possible if the store to `x` stores `42`,
  which circularly depends on the store to `y` storing `42`. Note that
  without this restriction, such an execution is possible.
 
  — *end note*]
 
- [*Note 7*:
 
  The recommendation similarly disallows `r1 == r2 == 42` in the following
  example, with `x` and `y` again initially zero:
 
  ``` cpp
@@ -134,18 +128,19 @@ if (r2 == 42) x.store(42, memory_order::relaxed);
 
  Atomic read-modify-write operations shall always read the last value (in
  the modification order) written before the write associated with the
  read-modify-write operation.
 
- Implementations should make atomic stores visible to atomic loads within
- a reasonable amount of time.
-
- ``` cpp
- template<class T>
- T kill_dependency(T y) noexcept;
- ```
-
- *Effects:* The argument does not carry a dependency to the return
- value [[intro.multithread]].
-
- *Returns:* `y`.
 
 
@@ -1,11 +1,11 @@
  ### Order and consistency <a id="atomics.order">[[atomics.order]]</a>
 
  ``` cpp
  namespace std {
  enum class memory_order : unspecified {
+ relaxed = 0, acquire = 2, release = 3, acq_rel = 4, seq_cst = 5
  };
  }
  ```
 
  The enumeration `memory_order` specifies the detailed regular
 
@@ -15,21 +15,15 @@ enumerated values and their meanings are as follows:
 
  - `memory_order::relaxed`: no operation orders memory.
  - `memory_order::release`, `memory_order::acq_rel`, and
  `memory_order::seq_cst`: a store operation performs a release
  operation on the affected memory location.
  - `memory_order::acquire`, `memory_order::acq_rel`, and
  `memory_order::seq_cst`: a load operation performs an acquire
  operation on the affected memory location.
 
+ [*Note 1*: Atomic operations specifying `memory_order::relaxed` are
  relaxed with respect to memory ordering. Implementations must still
  guarantee that any given atomic access to a particular atomic object be
  indivisible with respect to all other atomic accesses to that
  object. — *end note*]
 
 
@@ -64,35 +58,35 @@ S:
  - if a `memory_order::seq_cst` fence X happens before A and B is a
  `memory_order::seq_cst` operation, then X precedes B in S; and
  - if a `memory_order::seq_cst` fence X happens before A and B happens
  before a `memory_order::seq_cst` fence Y, then X precedes Y in S.
 
+ [*Note 2*: This definition ensures that S is consistent with the
  modification order of any atomic object M. It also ensures that a
  `memory_order::seq_cst` load A of M gets its value either from the last
  modification of M that precedes A in S or from some
  non-`memory_order::seq_cst` modification of M that does not happen
  before any modification of M that precedes A in S. — *end note*]
 
+ [*Note 3*: We do not require that S be consistent with “happens before”
  [[intro.races]]. This allows more efficient implementation of
  `memory_order::acquire` and `memory_order::release` on some machine
  architectures. It can produce surprising results when these are mixed
  with `memory_order::seq_cst` accesses. — *end note*]
 
+ [*Note 4*: `memory_order::seq_cst` ensures sequential consistency only
  for a program that is free of data races and uses exclusively
  `memory_order::seq_cst` atomic operations. Any use of weaker ordering
  will invalidate this guarantee unless extreme care is used. In many
  cases, `memory_order::seq_cst` atomic operations are reorderable with
  respect to other atomic operations performed by the same
  thread. — *end note*]
 
  Implementations should ensure that no “out-of-thin-air” values are
  computed that circularly depend on their own computation.
 
+ [*Note 5*:
 
  For example, with `x` and `y` initially zero,
 
  ``` cpp
  // Thread 1:
 
@@ -111,11 +105,11 @@ store of 42 to `y` is only possible if the store to `x` stores `42`,
  which circularly depends on the store to `y` storing `42`. Note that
  without this restriction, such an execution is possible.
 
  — *end note*]
 
+ [*Note 6*:
 
  The recommendation similarly disallows `r1 == r2 == 42` in the following
  example, with `x` and `y` again initially zero:
 
  ``` cpp
 
@@ -134,18 +128,19 @@ if (r2 == 42) x.store(42, memory_order::relaxed);
 
  Atomic read-modify-write operations shall always read the last value (in
  the modification order) written before the write associated with the
  read-modify-write operation.
 
+ An *atomic modify-write operation* is an atomic read-modify-write
+ operation with weaker synchronization requirements as specified in
+ [[atomics.fences]].
+
+ [*Note 7*: The intent is for atomic modify-write operations to be
+ implemented using mechanisms that are not ordered, in hardware, by the
+ implementation of acquire fences. No other semantic or hardware property
+ (e.g., that the mechanism is a far atomic operation) is
+ implied. — *end note*]
+
+ *Recommended practice:* The implementation should make atomic stores
+ visible to atomic loads, and atomic loads should observe atomic stores,
+ within a reasonable amount of time.