From Jason Turner

[atomics.order]

Diff to HTML by rtfpessoa

Files changed (1)
  1. tmp/tmpcbqd38_c/{from.md → to.md} +32 -18
tmp/tmpcbqd38_c/{from.md → to.md} RENAMED
@@ -1,13 +1,13 @@
  ## Order and consistency <a id="atomics.order">[[atomics.order]]</a>

  ``` cpp
  namespace std {
- typedef enum memory_order {
  memory_order_relaxed, memory_order_consume, memory_order_acquire,
  memory_order_release, memory_order_acq_rel, memory_order_seq_cst
- } memory_order;
  }
  ```

  The enumeration `memory_order` specifies the detailed regular
  (non-atomic) memory synchronization order as defined in
@@ -17,19 +17,24 @@ enumerated values and their meanings are as follows:
  - `memory_order_relaxed`: no operation orders memory.
  - `memory_order_release`, `memory_order_acq_rel`, and
  `memory_order_seq_cst`: a store operation performs a release operation
  on the affected memory location.
  - `memory_order_consume`: a load operation performs a consume operation
- on the affected memory location.
  - `memory_order_acquire`, `memory_order_acq_rel`, and
  `memory_order_seq_cst`: a load operation performs an acquire operation
  on the affected memory location.

- Atomic operations specifying `memory_order_relaxed` are relaxed with
- respect to memory ordering. Implementations must still guarantee that
- any given atomic access to a particular atomic object be indivisible
- with respect to all other atomic accesses to that object.

  An atomic operation *A* that performs a release operation on an atomic
  object *M* synchronizes with an atomic operation *B* that performs an
  acquire operation on *M* and takes its value from any side effect in the
  release sequence headed by *A*.
@@ -45,14 +50,14 @@ of the following values:
  - if *A* exists, the result of some modification of *M* that is not
  `memory_order_seq_cst` and that does not happen before *A*, or
  - if *A* does not exist, the result of some modification of *M* that is
  not `memory_order_seq_cst`.

- Although it is not explicitly required that *S* include locks, it can
- always be extended to an order that does include lock and unlock
- operations, since the ordering between those is already included in the
- “happens before” ordering.

  For an atomic operation *B* that reads the value of an atomic object
  *M*, if there is a `memory_order_seq_cst` fence *X* sequenced before
  *B*, then *B* observes either the last `memory_order_seq_cst`
  modification of *M* preceding *X* in the total order *S* or a later
@@ -80,21 +85,24 @@ later than *A* in the modification order of *M* if:
  before *B*, and *A* precedes *Y* in *S*, or
  - there are `memory_order_seq_cst` fences *X* and *Y* such that *A* is
  sequenced before *X*, *Y* is sequenced before *B*, and *X* precedes
  *Y* in *S*.

- `memory_order_seq_cst` ensures sequential consistency only for a program
- that is free of data races and uses exclusively `memory_order_seq_cst`
- operations. Any use of weaker ordering will invalidate this guarantee
- unless extreme care is used. In particular, `memory_order_seq_cst`
- fences ensure a total order only for the fences themselves. Fences
- cannot, in general, be used to restore sequential consistency for atomic
- operations with weaker ordering specifications.

  Implementations should ensure that no “out-of-thin-air” values are
  computed that circularly depend on their own computation.

  For example, with `x` and `y` initially zero,

  ``` cpp
  // Thread 1:
  r1 = y.load(memory_order_relaxed);
@@ -110,10 +118,14 @@ y.store(r2, memory_order_relaxed);
  should not produce `r1 == r2 == 42`, since the store of 42 to `y` is
  only possible if the store to `x` stores `42`, which circularly depends
  on the store to `y` storing `42`. Note that without this restriction,
  such an execution is possible.

  The recommendation similarly disallows `r1 == r2 == 42` in the following
  example, with `x` and `y` again initially zero:

  ``` cpp
  // Thread 1:
@@ -125,10 +137,12 @@ if (r1 == 42) y.store(42, memory_order_relaxed);
  // Thread 2:
  r2 = y.load(memory_order_relaxed);
  if (r2 == 42) x.store(42, memory_order_relaxed);
  ```

  Atomic read-modify-write operations shall always read the last value (in
  the modification order) written before the write associated with the
  read-modify-write operation.

  Implementations should make atomic stores visible to atomic loads within
 
  ## Order and consistency <a id="atomics.order">[[atomics.order]]</a>

  ``` cpp
  namespace std {
+ enum memory_order {
  memory_order_relaxed, memory_order_consume, memory_order_acquire,
  memory_order_release, memory_order_acq_rel, memory_order_seq_cst
+ };
  }
  ```
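
As a quick illustration of the synopsis above (an editorial sketch, not part of the diffed wording; `counter` is an arbitrary name), each enumerator is simply passed as the ordering argument of a `std::atomic` operation:

``` cpp
#include <atomic>

std::atomic<int> counter{0};   // illustrative object, not from the diff

void exercise_orders() {
  counter.store(1, std::memory_order_release);       // store: release ordering
  int v = counter.load(std::memory_order_acquire);   // load: acquire ordering
  counter.fetch_add(v, std::memory_order_relaxed);   // read-modify-write: relaxed
  counter.exchange(0, std::memory_order_seq_cst);    // read-modify-write: seq_cst
}
```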
 
  The enumeration `memory_order` specifies the detailed regular
  (non-atomic) memory synchronization order as defined in

  - `memory_order_relaxed`: no operation orders memory.
  - `memory_order_release`, `memory_order_acq_rel`, and
  `memory_order_seq_cst`: a store operation performs a release operation
  on the affected memory location.
  - `memory_order_consume`: a load operation performs a consume operation
+ on the affected memory location. \[*Note 1*: Prefer
+ `memory_order_acquire`, which provides stronger guarantees than
+ `memory_order_consume`. Implementations have found it infeasible to
+ provide performance better than that of `memory_order_acquire`.
+ Specification revisions are under consideration. — *end note*]
  - `memory_order_acquire`, `memory_order_acq_rel`, and
  `memory_order_seq_cst`: a load operation performs an acquire operation
  on the affected memory location.

+ [*Note 2*: Atomic operations specifying `memory_order_relaxed` are
+ relaxed with respect to memory ordering. Implementations must still
+ guarantee that any given atomic access to a particular atomic object be
+ indivisible with respect to all other atomic accesses to that
+ object. — *end note*]
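
A minimal sketch of what the note above guarantees (illustrative only; `hits` and the thread count are made-up): even with `memory_order_relaxed`, concurrent increments of the same atomic object are indivisible, so no update is lost, although no ordering of other memory is implied:

``` cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> hits{0};   // illustrative name

void worker() {
  for (int i = 0; i < 100000; ++i)
    hits.fetch_add(1, std::memory_order_relaxed);   // indivisible, but orders nothing else
}

int main() {
  std::thread t1(worker), t2(worker);
  t1.join();
  t2.join();
  assert(hits.load(std::memory_order_relaxed) == 200000);   // no lost increments
}
```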
 
  An atomic operation *A* that performs a release operation on an atomic
  object *M* synchronizes with an atomic operation *B* that performs an
  acquire operation on *M* and takes its value from any side effect in the
  release sequence headed by *A*.
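
A sketch of the synchronizes-with rule just stated (not part of the diffed wording; `payload` and `ready` are invented names): the release store is *A*, the acquire load that reads `true` is *B*, so the write to `payload` is visible to the reader:

``` cpp
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                  // ordinary, non-atomic data
std::atomic<bool> ready{false};   // the atomic object M

void producer() {
  payload = 42;                                   // sequenced before the release store A
  ready.store(true, std::memory_order_release);   // A: release operation on M
}

void consumer() {
  while (!ready.load(std::memory_order_acquire))  // B: acquire operation on M
    ;                                             // spin until B takes its value from A
  assert(payload == 42);                          // A synchronizes with B, so this holds
}

int main() {
  std::thread t1(producer), t2(consumer);
  t1.join();
  t2.join();
}
```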
 
  - if *A* exists, the result of some modification of *M* that is not
  `memory_order_seq_cst` and that does not happen before *A*, or
  - if *A* does not exist, the result of some modification of *M* that is
  not `memory_order_seq_cst`.

+ [*Note 3*: Although it is not explicitly required that *S* include
+ locks, it can always be extended to an order that does include lock and
+ unlock operations, since the ordering between those is already included
+ in the “happens before” ordering. — *end note*]

  For an atomic operation *B* that reads the value of an atomic object
  *M*, if there is a `memory_order_seq_cst` fence *X* sequenced before
  *B*, then *B* observes either the last `memory_order_seq_cst`
  modification of *M* preceding *X* in the total order *S* or a later

  before *B*, and *A* precedes *Y* in *S*, or
  - there are `memory_order_seq_cst` fences *X* and *Y* such that *A* is
  sequenced before *X*, *Y* is sequenced before *B*, and *X* precedes
  *Y* in *S*.

+ [*Note 4*: `memory_order_seq_cst` ensures sequential consistency only
+ for a program that is free of data races and uses exclusively
+ `memory_order_seq_cst` operations. Any use of weaker ordering will
+ invalidate this guarantee unless extreme care is used. In particular,
+ `memory_order_seq_cst` fences ensure a total order only for the fences
+ themselves. Fences cannot, in general, be used to restore sequential
+ consistency for atomic operations with weaker ordering
+ specifications. — *end note*]
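
A sketch of the caveat in the note above (not part of the diffed wording; the variables mirror the examples below): when every access is `memory_order_seq_cst`, the outcome `r1 == 0 && r2 == 0` is impossible, but weakening any one of the four operations (say, to release/acquire) makes that outcome possible again:

``` cpp
#include <atomic>

std::atomic<int> x{0}, y{0};
int r1, r2;

// Thread 1:
void thread1() {
  x.store(1, std::memory_order_seq_cst);
  r1 = y.load(std::memory_order_seq_cst);
}

// Thread 2:
void thread2() {
  y.store(1, std::memory_order_seq_cst);
  r2 = x.load(std::memory_order_seq_cst);
}

// With all four operations seq_cst there is a single total order S,
// so at least one of the loads observes 1: r1 == 0 && r2 == 0 cannot occur.
```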
 
  Implementations should ensure that no “out-of-thin-air” values are
  computed that circularly depend on their own computation.

+ [*Note 5*:
+
  For example, with `x` and `y` initially zero,

  ``` cpp
  // Thread 1:
  r1 = y.load(memory_order_relaxed);

  should not produce `r1 == r2 == 42`, since the store of 42 to `y` is
  only possible if the store to `x` stores `42`, which circularly depends
  on the store to `y` storing `42`. Note that without this restriction,
  such an execution is possible.

+ — *end note*]
+
+ [*Note 6*:
+
  The recommendation similarly disallows `r1 == r2 == 42` in the following
  example, with `x` and `y` again initially zero:

  ``` cpp
  // Thread 1:

  // Thread 2:
  r2 = y.load(memory_order_relaxed);
  if (r2 == 42) x.store(42, memory_order_relaxed);
  ```

+ — *end note*]
+
  Atomic read-modify-write operations shall always read the last value (in
  the modification order) written before the write associated with the
  read-modify-write operation.
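
One way to read this requirement (an editorial sketch; `high_water` and `update_max` are invented names): a compare-and-exchange loop works because each read-modify-write sees the latest value in the modification order, so a failed exchange hands back a value that is genuinely current:

``` cpp
#include <atomic>

std::atomic<int> high_water{0};   // illustrative name

// Raise high_water to at least candidate.
void update_max(int candidate) {
  int current = high_water.load(std::memory_order_relaxed);
  // compare_exchange_weak is a read-modify-write: on failure it reloads the
  // latest value (the last one in the modification order) into current.
  while (current < candidate &&
         !high_water.compare_exchange_weak(current, candidate,
                                           std::memory_order_relaxed))
    ;   // retry with the refreshed current value
}
```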
 
  Implementations should make atomic stores visible to atomic loads within