From Jason Turner

[atomics.ref.ops]

### Operations <a id="atomics.ref.ops">[[atomics.ref.ops]]</a>

``` cpp
static constexpr size_t required_alignment;
```

The alignment required for an object to be referenced by an atomic reference, which is at least `alignof(T)`.

[*Note 1*: Hardware could require an object referenced by an `atomic_ref` to have stricter alignment [[basic.align]] than other objects of type `T`. Further, whether operations on an `atomic_ref` are lock-free could depend on the alignment of the referenced object. For example, lock-free operations on `std::complex<double>` could be supported only if aligned to `2*alignof(double)`. — *end note*]

``` cpp
static constexpr bool is_always_lock_free;
```

The static data member `is_always_lock_free` is `true` if the `atomic_ref` type’s operations are always lock-free, and `false` otherwise.

``` cpp
bool is_lock_free() const noexcept;
```

*Returns:* `true` if operations on all objects of the type `atomic_ref<T>` are lock-free, `false` otherwise.

``` cpp
atomic_ref(T& obj);
```

*Preconditions:* The referenced object is aligned to `required_alignment`.

*Ensures:* `*this` references `obj`.

*Throws:* Nothing.

``` cpp
atomic_ref(const atomic_ref& ref) noexcept;
```

*Ensures:* `*this` references the object referenced by `ref`.

``` cpp
void store(T desired, memory_order order = memory_order::seq_cst) const noexcept;
```

*Preconditions:* The `order` argument is neither `memory_order::consume`, `memory_order::acquire`, nor `memory_order::acq_rel`.

*Effects:* Atomically replaces the value referenced by `*ptr` with the value of `desired`. Memory is affected according to the value of `order`.

``` cpp
T operator=(T desired) const noexcept;
```

*Effects:* Equivalent to:

``` cpp
store(desired);
return desired;
```

``` cpp
T load(memory_order order = memory_order::seq_cst) const noexcept;
```

*Preconditions:* The `order` argument is neither `memory_order::release` nor `memory_order::acq_rel`.

*Effects:* Memory is affected according to the value of `order`.

*Returns:* Atomically returns the value referenced by `*ptr`.

``` cpp
operator T() const noexcept;
```

*Effects:* Equivalent to: `return load();`

``` cpp
T exchange(T desired, memory_order order = memory_order::seq_cst) const noexcept;
```

*Effects:* Atomically replaces the value referenced by `*ptr` with `desired`. Memory is affected according to the value of `order`. This operation is an atomic read-modify-write operation [[intro.multithread]].

*Returns:* Atomically returns the value referenced by `*ptr` immediately before the effects.

``` cpp
bool compare_exchange_weak(T& expected, T desired,
                           memory_order success, memory_order failure) const noexcept;

bool compare_exchange_strong(T& expected, T desired,
                             memory_order success, memory_order failure) const noexcept;

bool compare_exchange_weak(T& expected, T desired,
                           memory_order order = memory_order::seq_cst) const noexcept;

bool compare_exchange_strong(T& expected, T desired,
                             memory_order order = memory_order::seq_cst) const noexcept;
```

*Preconditions:* The `failure` argument is neither `memory_order::release` nor `memory_order::acq_rel`.

*Effects:* Retrieves the value in `expected`. It then atomically compares the value representation of the value referenced by `*ptr` for equality with that previously retrieved from `expected`, and if `true`, replaces the value referenced by `*ptr` with that in `desired`. If and only if the comparison is `true`, memory is affected according to the value of `success`, and if the comparison is `false`, memory is affected according to the value of `failure`. When only one `memory_order` argument is supplied, the value of `success` is `order`, and the value of `failure` is `order` except that a value of `memory_order::acq_rel` shall be replaced by the value `memory_order::acquire` and a value of `memory_order::release` shall be replaced by the value `memory_order::relaxed`. If and only if the comparison is `false` then, after the atomic operation, the value in `expected` is replaced by the value read from the value referenced by `*ptr` during the atomic comparison. If the operation returns `true`, these operations are atomic read-modify-write operations [[intro.races]] on the value referenced by `*ptr`. Otherwise, these operations are atomic load operations on that memory.

*Returns:* The result of the comparison.

*Remarks:* A weak compare-and-exchange operation may fail spuriously. That is, even when the contents of memory referred to by `expected` and `ptr` are equal, it may return `false` and store back to `expected` the same memory contents that were originally there.

[*Note 2*: This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines. A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop. When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms. When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable. — *end note*]

``` cpp
void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
```

*Preconditions:* `order` is neither `memory_order::release` nor `memory_order::acq_rel`.

*Effects:* Repeatedly performs the following steps, in order:

- Evaluates `load(order)` and compares its value representation for equality against that of `old`.
- If they compare unequal, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.

*Remarks:* This function is an atomic waiting operation [[atomics.wait]] on atomic object `*ptr`.

``` cpp
void notify_one() const noexcept;
```

*Effects:* Unblocks the execution of at least one atomic waiting operation on `*ptr` that is eligible to be unblocked [[atomics.wait]] by this call, if any such atomic waiting operations exist.

*Remarks:* This function is an atomic notifying operation [[atomics.wait]] on atomic object `*ptr`.

``` cpp
void notify_all() const noexcept;
```

*Effects:* Unblocks the execution of all atomic waiting operations on `*ptr` that are eligible to be unblocked [[atomics.wait]] by this call.

*Remarks:* This function is an atomic notifying operation [[atomics.wait]] on atomic object `*ptr`.
