From Jason Turner

[atomics.order]

Files changed (1)
  1. tmp/tmprj3swsmx/{from.md → to.md} +23 -57
tmp/tmprj3swsmx/{from.md → to.md} RENAMED
@@ -40,16 +40,14 @@ orders for all affected locations, such that each `memory_order_seq_cst`
  operation *B* that loads a value from an atomic object *M* observes one
  of the following values:

  - the result of the last modification *A* of *M* that precedes *B* in
  *S*, if it exists, or
- - if *A* exists, the result of some modification of *M* in the visible
- sequence of side effects with respect to *B* that is not
+ - if *A* exists, the result of some modification of *M* that is not
  `memory_order_seq_cst` and that does not happen before *A*, or
- - if *A* does not exist, the result of some modification of *M* in the
- visible sequence of side effects with respect to *B* that is not
- `memory_order_seq_cst`.
+ - if *A* does not exist, the result of some modification of *M* that is
+ not `memory_order_seq_cst`.

  Although it is not explicitly required that *S* include locks, it can
  always be extended to an order that does include lock and unlock
  operations, since the ordering between those is already included in the
  “happens before” ordering.
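
To make the force of the single total order *S* concrete: with `x` and `y` initially zero and every access `memory_order_seq_cst`, the store-buffering outcome `r1 == r2 == 0` is impossible, because one of the two stores comes first in *S* and the load sequenced after the other store must then observe it. A minimal self-contained sketch (the declarations and threading scaffolding are illustrative, not part of the quoted wording):

``` cpp
#include <atomic>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1, r2;

int main() {
    std::thread t1([] {
        x.store(1, std::memory_order_seq_cst);
        r1 = y.load(std::memory_order_seq_cst);
    });
    std::thread t2([] {
        y.store(1, std::memory_order_seq_cst);
        r2 = x.load(std::memory_order_seq_cst);
    });
    t1.join();
    t2.join();
    // One of the stores is first in S; the load that follows the other
    // store in S must observe it, so r1 == 0 && r2 == 0 never happens.
}
```
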
@@ -71,63 +69,33 @@ modifies *M* and *B* takes its value, if there are
  `memory_order_seq_cst` fences *X* and *Y* such that *A* is sequenced
  before *X*, *Y* is sequenced before *B*, and *X* precedes *Y* in *S*,
  then *B* observes either the effects of *A* or a later modification of
  *M* in its modification order.

- For atomic operations *A* and *B* on an atomic object *M*, if there are
- `memory_order_seq_cst` fences `X` and `Y` such that *A* is sequenced
- before *X*, *Y* is sequenced before *B*, and *X* precedes *Y* in *S*,
- then *B* occurs later than *A* in the modification order of *M*.
+ For atomic modifications *A* and *B* of an atomic object *M*, *B* occurs
+ later than *A* in the modification order of *M* if:
+
+ - there is a `memory_order_seq_cst` fence *X* such that *A* is sequenced
+ before *X*, and *X* precedes *B* in *S*, or
+ - there is a `memory_order_seq_cst` fence *Y* such that *Y* is sequenced
+ before *B*, and *A* precedes *Y* in *S*, or
+ - there are `memory_order_seq_cst` fences *X* and *Y* such that *A* is
+ sequenced before *X*, *Y* is sequenced before *B*, and *X* precedes
+ *Y* in *S*.

  `memory_order_seq_cst` ensures sequential consistency only for a program
  that is free of data races and uses exclusively `memory_order_seq_cst`
  operations. Any use of weaker ordering will invalidate this guarantee
  unless extreme care is used. In particular, `memory_order_seq_cst`
  fences ensure a total order only for the fences themselves. Fences
  cannot, in general, be used to restore sequential consistency for atomic
  operations with weaker ordering specifications.

- An atomic store shall only store a value that has been computed from
- constants and program input values by a finite sequence of program
- evaluations, such that each evaluation observes the values of variables
- as computed by the last prior assignment in the sequence. The ordering
- of evaluations in this sequence shall be such that:
+ Implementations should ensure that no “out-of-thin-air” values are
+ computed that circularly depend on their own computation.

- - if an evaluation *B* observes a value computed by *A* in a different
- thread, then *B* does not happen before *A*, and
- - if an evaluation *A* is included in the sequence, then every
- evaluation that assigns to the same variable and happens before *A* is
- included.
-
- The second requirement disallows “out-of-thin-air” or “speculative”
- stores of atomics when relaxed atomics are used. Since unordered
- operations are involved, evaluations may appear in this sequence out of
- thread order. For example, with `x` and `y` initially zero,
-
- ``` cpp
- // Thread 1:
- r1 = y.load(memory_order_relaxed);
- x.store(r1, memory_order_relaxed);
- ```
-
- ``` cpp
- // Thread 2:
- r2 = x.load(memory_order_relaxed);
- y.store(42, memory_order_relaxed);
- ```
-
- is allowed to produce `r1 = r2 = 42`. The sequence of evaluations
- justifying this consists of:
-
- ``` cpp
- y.store(42, memory_order_relaxed);
- r1 = y.load(memory_order_relaxed);
- x.store(r1, memory_order_relaxed);
- r2 = x.load(memory_order_relaxed);
- ```
-
- On the other hand,
+ For example, with `x` and `y` initially zero,

  ``` cpp
  // Thread 1:
  r1 = y.load(memory_order_relaxed);
  x.store(r1, memory_order_relaxed);
@@ -137,32 +105,30 @@ x.store(r1, memory_order_relaxed);
  // Thread 2:
  r2 = x.load(memory_order_relaxed);
  y.store(r2, memory_order_relaxed);
  ```

- may not produce `r1 = r2 = 42`, since there is no sequence of
- evaluations that results in the computation of 42. In the absence of
- “relaxed” operations and read-modify-write operations with weaker than
- `memory_order_acq_rel` ordering, the second requirement has no impact.
+ should not produce `r1 == r2 == 42`, since the store of 42 to `y` is
+ only possible if the store to `x` stores `42`, which circularly depends
+ on the store to `y` storing `42`. Note that without this restriction,
+ such an execution is possible.

- The requirements do allow `r1 == r2 == 42` in the following example,
- with `x` and `y` initially zero:
+ The recommendation similarly disallows `r1 == r2 == 42` in the following
+ example, with `x` and `y` again initially zero:

  ``` cpp
  // Thread 1:
  r1 = x.load(memory_order_relaxed);
- if (r1 == 42) y.store(r1, memory_order_relaxed);
+ if (r1 == 42) y.store(42, memory_order_relaxed);
  ```

  ``` cpp
  // Thread 2:
  r2 = y.load(memory_order_relaxed);
  if (r2 == 42) x.store(42, memory_order_relaxed);
  ```

- However, implementations should not allow such behavior.
-
  Atomic read-modify-write operations shall always read the last value (in
  the modification order) written before the write associated with the
  read-modify-write operation.

  Implementations should make atomic stores visible to atomic loads within
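
The fence rules added in the second hunk are what let `memory_order_seq_cst` fences impose order on otherwise-relaxed operations: whichever fence is later in *S*, the load sequenced after it must observe the store sequenced before the other fence, which rules out `r1 == r2 == 0`. A minimal sketch (the scaffolding is again illustrative, not part of the quoted wording):

``` cpp
#include <atomic>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1, r2;

int main() {
    std::thread t1([] {
        x.store(1, std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_seq_cst);  // fence X
        r1 = y.load(std::memory_order_relaxed);
    });
    std::thread t2([] {
        y.store(1, std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_seq_cst);  // fence Y
        r2 = x.load(std::memory_order_relaxed);
    });
    t1.join();
    t2.join();
    // X and Y are both in S. If X precedes Y, the load of x (sequenced
    // after Y) observes the store to x (sequenced before X), so r2 == 1;
    // symmetrically, if Y precedes X then r1 == 1. Either way,
    // r1 == 0 && r2 == 0 is impossible.
}
```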
 
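The unchanged read-modify-write rule at the end of the last hunk is what makes concurrent increments lossless even with relaxed ordering: each `fetch_add` must read the value written immediately before its own write in the modification order of the counter. A minimal sketch (names and counts are illustrative):

``` cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

int main() {
    std::atomic<int> counter{0};
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) {
        threads.emplace_back([&counter] {
            for (int j = 0; j < 100000; ++j) {
                // Each fetch_add reads the latest value in counter's
                // modification order, so no increment can be lost even
                // at memory_order_relaxed.
                counter.fetch_add(1, std::memory_order_relaxed);
            }
        });
    }
    for (auto& t : threads) {
        t.join();
    }
    assert(counter.load() == 400000);  // always holds
}
```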