From Jason Turner

[atomics.types.generic]

tmp/tmpu7v_j7ov/{from.md → to.md} (renamed, +955 -195)
@@ -2,113 +2,106 @@

  ``` cpp
  namespace std {
  template<class T> struct atomic {
  using value_type = T;

  static constexpr bool is_always_lock_free = implementation-defined; // whether a given atomic type's operations are always lock free
  bool is_lock_free() const volatile noexcept;
  bool is_lock_free() const noexcept;
- void store(T, memory_order = memory_order_seq_cst) volatile noexcept;
- void store(T, memory_order = memory_order_seq_cst) noexcept;
- T load(memory_order = memory_order_seq_cst) const volatile noexcept;
- T load(memory_order = memory_order_seq_cst) const noexcept;
  operator T() const volatile noexcept;
  operator T() const noexcept;
- T exchange(T, memory_order = memory_order_seq_cst) volatile noexcept;
- T exchange(T, memory_order = memory_order_seq_cst) noexcept;
  bool compare_exchange_weak(T&, T, memory_order, memory_order) volatile noexcept;
  bool compare_exchange_weak(T&, T, memory_order, memory_order) noexcept;
  bool compare_exchange_strong(T&, T, memory_order, memory_order) volatile noexcept;
  bool compare_exchange_strong(T&, T, memory_order, memory_order) noexcept;
- bool compare_exchange_weak(T&, T, memory_order = memory_order_seq_cst) volatile noexcept;
- bool compare_exchange_weak(T&, T, memory_order = memory_order_seq_cst) noexcept;
- bool compare_exchange_strong(T&, T, memory_order = memory_order_seq_cst) volatile noexcept;
- bool compare_exchange_strong(T&, T, memory_order = memory_order_seq_cst) noexcept;

- atomic() noexcept = default;
- constexpr atomic(T) noexcept;
- atomic(const atomic&) = delete;
- atomic& operator=(const atomic&) = delete;
- atomic& operator=(const atomic&) volatile = delete;
- T operator=(T) volatile noexcept;
- T operator=(T) noexcept;
  };
  }
  ```

- The template argument for `T` shall be trivially copyable ([[basic.types]]).

  [*Note 1*: Type arguments that are not also statically initializable
  may be difficult to use. — *end note*]

  The specialization `atomic<bool>` is a standard-layout struct.

  [*Note 2*: The representation of an atomic specialization need not have
- the same size as its corresponding argument type. Specializations should
- have the same size whenever possible, as this reduces the effort
- required to port existing code. — *end note*]

  ### Operations on atomic types <a id="atomics.types.operations">[[atomics.types.operations]]</a>

- [*Note 1*: Many operations are volatile-qualified. The “volatile as
- device register” semantics have not changed in the standard. This
- qualification means that volatility is preserved when applying these
- operations to volatile objects. It does not mean that operations on
- non-volatile objects become volatile. — *end note*]
-
  ``` cpp
- atomic() noexcept = default;
  ```

- *Effects:* Leaves the atomic object in an uninitialized state.

- [*Note 1*: These semantics ensure compatibility with C. — *end note*]

  ``` cpp
  constexpr atomic(T desired) noexcept;
  ```

  *Effects:* Initializes the object with the value `desired`.
- Initialization is not an atomic operation ([[intro.multithread]]).

- [*Note 2*: It is possible to have an access to an atomic object `A`
  race with its construction, for example by communicating the address of
  the just-constructed object `A` to another thread via
- `memory_order_relaxed` operations on a suitable atomic pointer variable,
- and then immediately accessing `A` in the receiving thread. This results
- in undefined behavior. — *end note*]
-
- ``` cpp
- #define ATOMIC_VAR_INIT(value) see below
- ```
-
- The macro expands to a token sequence suitable for constant
- initialization of an atomic variable of static storage duration of a
- type that is initialization-compatible with `value`.
-
- [*Note 3*: This operation may need to initialize locks. — *end note*]
-
- Concurrent access to the variable being initialized, even via an atomic
- operation, constitutes a data race.
-
- [*Example 1*:
-
- ``` cpp
- atomic<int> v = ATOMIC_VAR_INIT(5);
- ```
-
- — *end example*]

  ``` cpp
  static constexpr bool is_always_lock_free = implementation-defined; // whether a given atomic type's operations are always lock free
  ```

  The `static` data member `is_always_lock_free` is `true` if the atomic
  type’s operations are always lock-free, and `false` otherwise.

- [*Note 4*: The value of `is_always_lock_free` is consistent with the
  value of the corresponding `ATOMIC_..._LOCK_FREE` macro, if
  defined. — *end note*]

  ``` cpp
  bool is_lock_free() const volatile noexcept;
@@ -116,63 +109,79 @@ bool is_lock_free() const noexcept;
  ```

  *Returns:* `true` if the object’s operations are lock-free, `false`
  otherwise.

- [*Note 5*: The return value of the `is_lock_free` member function is
  consistent with the value of `is_always_lock_free` for the same
  type. — *end note*]
  ``` cpp
- void store(T desired, memory_order order = memory_order_seq_cst) volatile noexcept;
- void store(T desired, memory_order order = memory_order_seq_cst) noexcept;
  ```

- *Requires:* The `order` argument shall not be `memory_order_consume`,
- `memory_order_acquire`, nor `memory_order_acq_rel`.

  *Effects:* Atomically replaces the value pointed to by `this` with the
  value of `desired`. Memory is affected according to the value of
  `order`.

  ``` cpp
  T operator=(T desired) volatile noexcept;
  T operator=(T desired) noexcept;
  ```

- *Effects:* Equivalent to: `store(desired)`.

  *Returns:* `desired`.

  ``` cpp
- T load(memory_order order = memory_order_seq_cst) const volatile noexcept;
- T load(memory_order order = memory_order_seq_cst) const noexcept;
  ```

- *Requires:* The `order` argument shall not be `memory_order_release` nor
- `memory_order_acq_rel`.

  *Effects:* Memory is affected according to the value of `order`.

  *Returns:* Atomically returns the value pointed to by `this`.

  ``` cpp
  operator T() const volatile noexcept;
  operator T() const noexcept;
  ```

  *Effects:* Equivalent to: `return load();`

  ``` cpp
- T exchange(T desired, memory_order order = memory_order_seq_cst) volatile noexcept;
- T exchange(T desired, memory_order order = memory_order_seq_cst) noexcept;
  ```

  *Effects:* Atomically replaces the value pointed to by `this` with
  `desired`. Memory is affected according to the value of `order`. These
  operations are atomic read-modify-write
- operations ([[intro.multithread]]).

  *Returns:* Atomically returns the value pointed to by `this` immediately
  before the effects.

  ``` cpp
@@ -183,57 +192,60 @@ bool compare_exchange_weak(T& expected, T desired,
  bool compare_exchange_strong(T& expected, T desired,
  memory_order success, memory_order failure) volatile noexcept;
  bool compare_exchange_strong(T& expected, T desired,
  memory_order success, memory_order failure) noexcept;
  bool compare_exchange_weak(T& expected, T desired,
- memory_order order = memory_order_seq_cst) volatile noexcept;
  bool compare_exchange_weak(T& expected, T desired,
- memory_order order = memory_order_seq_cst) noexcept;
  bool compare_exchange_strong(T& expected, T desired,
- memory_order order = memory_order_seq_cst) volatile noexcept;
  bool compare_exchange_strong(T& expected, T desired,
- memory_order order = memory_order_seq_cst) noexcept;
  ```

- *Requires:* The `failure` argument shall not be `memory_order_release`
- nor `memory_order_acq_rel`.

  *Effects:* Retrieves the value in `expected`. It then atomically
- compares the contents of the memory pointed to by `this` for equality
- with that previously retrieved from `expected`, and if true, replaces
- the contents of the memory pointed to by `this` with that in `desired`.
- If and only if the comparison is true, memory is affected according to
- the value of `success`, and if the comparison is false, memory is
- affected according to the value of `failure`. When only one
- `memory_order` argument is supplied, the value of `success` is `order`,
- and the value of `failure` is `order` except that a value of
- `memory_order_acq_rel` shall be replaced by the value
- `memory_order_acquire` and a value of `memory_order_release` shall be
- replaced by the value `memory_order_relaxed`. If and only if the
- comparison is false then, after the atomic operation, the contents of
- the memory in `expected` are replaced by the value read from the memory
- pointed to by `this` during the atomic comparison. If the operation
- returns `true`, these operations are atomic read-modify-write
- operations ([[intro.multithread]]) on the memory pointed to by `this`.
  Otherwise, these operations are atomic load operations on that memory.

  *Returns:* The result of the comparison.

- [*Note 6*:

- For example, the effect of `compare_exchange_strong` is

  ``` cpp
  if (memcmp(this, &expected, sizeof(*this)) == 0)
  memcpy(this, &desired, sizeof(*this));
  else
  memcpy(&expected, this, sizeof(*this));
  ```

  — *end note*]

- [*Example 2*:

  The expected use of the compare-and-exchange operations is as follows.
  The compare-and-exchange operations will update `expected` when another
  iteration of the loop is needed.
@@ -244,16 +256,16 @@ do {
  } while (!current.compare_exchange_weak(expected, desired));
  ```

  — *end example*]

- [*Example 3*:

  Because the expected value is updated only on failure, code releasing
- the memory containing the `expected` value on success will work. E.g.,
- list head insertion will act atomically and would not introduce a data
- race in the following code:

  ``` cpp
  do {
  p->next = head; // make new list node point to the current head
  } while (!head.compare_exchange_weak(p->next, p)); // try to insert
@@ -269,90 +281,185 @@ the atomic object.
  *Remarks:* A weak compare-and-exchange operation may fail spuriously.
  That is, even when the contents of memory referred to by `expected` and
  `this` are equal, it may return `false` and store back to `expected` the
  same memory contents that were originally there.

- [*Note 7*: This spurious failure enables implementation of
  compare-and-exchange on a broader class of machines, e.g., load-locked
  store-conditional machines. A consequence of spurious failure is that
  nearly all uses of weak compare-and-exchange will be in a loop. When a
  compare-and-exchange is in a loop, the weak version will yield better
  performance on some platforms. When a weak compare-and-exchange would
  require a loop and a strong one would not, the strong one is
  preferable. — *end note*]

- [*Note 8*: The `memcpy` and `memcmp` semantics of the
- compare-and-exchange operations may result in failed comparisons for
- values that compare equal with `operator==` if the underlying type has
- padding bits, trap bits, or alternate representations of the same value.
- Thus, `compare_exchange_strong` should be used with extreme care. On the
- other hand, `compare_exchange_weak` should converge
- rapidly. — *end note*]
  ### Specializations for integers <a id="atomics.types.int">[[atomics.types.int]]</a>

- There are specializations of the `atomic` template for the integral
- types `char`, `signed char`, `unsigned char`, `short`, `unsigned short`,
- `int`, `unsigned int`, `long`, `unsigned long`, `long long`,
- `unsigned long long`, `char16_t`, `char32_t`, `wchar_t`, and any other
- types needed by the typedefs in the header `<cstdint>`. For each such
- integral type `integral`, the specialization `atomic<integral>` provides
- additional atomic operations appropriate to integral types.

- [*Note 1*: For the specialization `atomic<bool>`, see
  [[atomics.types.generic]]. — *end note*]

  ``` cpp
  namespace std {
  template<> struct atomic<integral> {
  using value_type = integral;
  using difference_type = value_type;

  static constexpr bool is_always_lock_free = implementation-defined; // whether a given atomic type's operations are always lock free
  bool is_lock_free() const volatile noexcept;
  bool is_lock_free() const noexcept;
- void store(integral, memory_order = memory_order_seq_cst) volatile noexcept;
- void store(integral, memory_order = memory_order_seq_cst) noexcept;
- integral load(memory_order = memory_order_seq_cst) const volatile noexcept;
- integral load(memory_order = memory_order_seq_cst) const noexcept;
- operator integral() const volatile noexcept;
- operator integral() const noexcept;
- integral exchange(integral, memory_order = memory_order_seq_cst) volatile noexcept;
- integral exchange(integral, memory_order = memory_order_seq_cst) noexcept;
- bool compare_exchange_weak(integral&, integral,
- memory_order, memory_order) volatile noexcept;
- bool compare_exchange_weak(integral&, integral,
- memory_order, memory_order) noexcept;
- bool compare_exchange_strong(integral&, integral,
- memory_order, memory_order) volatile noexcept;
- bool compare_exchange_strong(integral&, integral,
- memory_order, memory_order) noexcept;
- bool compare_exchange_weak(integral&, integral,
- memory_order = memory_order_seq_cst) volatile noexcept;
- bool compare_exchange_weak(integral&, integral,
- memory_order = memory_order_seq_cst) noexcept;
- bool compare_exchange_strong(integral&, integral,
- memory_order = memory_order_seq_cst) volatile noexcept;
- bool compare_exchange_strong(integral&, integral,
- memory_order = memory_order_seq_cst) noexcept;
- integral fetch_add(integral, memory_order = memory_order_seq_cst) volatile noexcept;
- integral fetch_add(integral, memory_order = memory_order_seq_cst) noexcept;
- integral fetch_sub(integral, memory_order = memory_order_seq_cst) volatile noexcept;
- integral fetch_sub(integral, memory_order = memory_order_seq_cst) noexcept;
- integral fetch_and(integral, memory_order = memory_order_seq_cst) volatile noexcept;
- integral fetch_and(integral, memory_order = memory_order_seq_cst) noexcept;
- integral fetch_or(integral, memory_order = memory_order_seq_cst) volatile noexcept;
- integral fetch_or(integral, memory_order = memory_order_seq_cst) noexcept;
- integral fetch_xor(integral, memory_order = memory_order_seq_cst) volatile noexcept;
- integral fetch_xor(integral, memory_order = memory_order_seq_cst) noexcept;

- atomic() noexcept = default;
  constexpr atomic(integral) noexcept;
  atomic(const atomic&) = delete;
  atomic& operator=(const atomic&) = delete;
  atomic& operator=(const atomic&) volatile = delete;

  integral operator=(integral) volatile noexcept;
  integral operator=(integral) noexcept;

  integral operator++(int) volatile noexcept;
  integral operator++(int) noexcept;
  integral operator--(int) volatile noexcept;
  integral operator--(int) noexcept;
@@ -368,94 +475,256 @@ namespace std {
  integral operator&=(integral) noexcept;
  integral operator|=(integral) volatile noexcept;
  integral operator|=(integral) noexcept;
  integral operator^=(integral) volatile noexcept;
  integral operator^=(integral) noexcept;
  };
  }
  ```

  The atomic integral specializations are standard-layout structs. They
- each have a trivial default constructor and a trivial destructor.

  Descriptions are provided below only for members that differ from the
  primary template.

  The following operations perform arithmetic computations. The key,
  operator, and computation correspondence is:

- **Table: Atomic arithmetic computations** <a id="tab:atomic.arithmetic.computations">[tab:atomic.arithmetic.computations]</a>

  | Key   | Op   | Computation          | Key   | Op  | Computation          |
  | ----- | ---- | -------------------- | ----- | --- | -------------------- |
  | `add` | `+`  | addition             | `sub` | `-` | subtraction          |
  | `or`  | `\|` | bitwise inclusive or | `xor` | `^` | bitwise exclusive or |
  | `and` | `&`  | bitwise and          |       |     |                      |

  ``` cpp
- T fetch_key(T operand, memory_order order = memory_order_seq_cst) volatile noexcept;
- T fetch_key(T operand, memory_order order = memory_order_seq_cst) noexcept;
  ```

  *Effects:* Atomically replaces the value pointed to by `this` with the
  result of the computation applied to the value pointed to by `this` and
  the given `operand`. Memory is affected according to the value of
  `order`. These operations are atomic read-modify-write
- operations ([[intro.multithread]]).

  *Returns:* Atomically, the value pointed to by `this` immediately before
  the effects.

- *Remarks:* For signed integer types, arithmetic is defined to use two’s
- complement representation. There are no undefined results.
  ``` cpp
  T operator op=(T operand) volatile noexcept;
  T operator op=(T operand) noexcept;
  ```

  *Effects:* Equivalent to:
  `return fetch_`*`key`*`(operand) `*`op`*` operand;`
  ### Partial specialization for pointers <a id="atomics.types.pointer">[[atomics.types.pointer]]</a>

  ``` cpp
  namespace std {
  template<class T> struct atomic<T*> {
  using value_type = T*;
  using difference_type = ptrdiff_t;

  static constexpr bool is_always_lock_free = implementation-defined; // whether a given atomic type's operations are always lock free
  bool is_lock_free() const volatile noexcept;
  bool is_lock_free() const noexcept;
- void store(T*, memory_order = memory_order_seq_cst) volatile noexcept;
- void store(T*, memory_order = memory_order_seq_cst) noexcept;
- T* load(memory_order = memory_order_seq_cst) const volatile noexcept;
- T* load(memory_order = memory_order_seq_cst) const noexcept;
  operator T*() const volatile noexcept;
  operator T*() const noexcept;
- T* exchange(T*, memory_order = memory_order_seq_cst) volatile noexcept;
- T* exchange(T*, memory_order = memory_order_seq_cst) noexcept;
  bool compare_exchange_weak(T*&, T*, memory_order, memory_order) volatile noexcept;
  bool compare_exchange_weak(T*&, T*, memory_order, memory_order) noexcept;
  bool compare_exchange_strong(T*&, T*, memory_order, memory_order) volatile noexcept;
  bool compare_exchange_strong(T*&, T*, memory_order, memory_order) noexcept;
- bool compare_exchange_weak(T*&, T*, memory_order = memory_order_seq_cst) volatile noexcept;
- bool compare_exchange_weak(T*&, T*, memory_order = memory_order_seq_cst) noexcept;
- bool compare_exchange_strong(T*&, T*, memory_order = memory_order_seq_cst) volatile noexcept;
- bool compare_exchange_strong(T*&, T*, memory_order = memory_order_seq_cst) noexcept;
- T* fetch_add(ptrdiff_t, memory_order = memory_order_seq_cst) volatile noexcept;
- T* fetch_add(ptrdiff_t, memory_order = memory_order_seq_cst) noexcept;
- T* fetch_sub(ptrdiff_t, memory_order = memory_order_seq_cst) volatile noexcept;
- T* fetch_sub(ptrdiff_t, memory_order = memory_order_seq_cst) noexcept;

- atomic() noexcept = default;
- constexpr atomic(T*) noexcept;
- atomic(const atomic&) = delete;
- atomic& operator=(const atomic&) = delete;
- atomic& operator=(const atomic&) volatile = delete;
- T* operator=(T*) volatile noexcept;
- T* operator=(T*) noexcept;

  T* operator++(int) volatile noexcept;
  T* operator++(int) noexcept;
  T* operator--(int) volatile noexcept;
  T* operator--(int) noexcept;
@@ -465,47 +734,55 @@ namespace std {
  T* operator--() noexcept;
  T* operator+=(ptrdiff_t) volatile noexcept;
  T* operator+=(ptrdiff_t) noexcept;
  T* operator-=(ptrdiff_t) volatile noexcept;
  T* operator-=(ptrdiff_t) noexcept;
  };
  }
  ```

  There is a partial specialization of the `atomic` class template for
  pointers. Specializations of this partial specialization are
- standard-layout structs. They each have a trivial default constructor
- and a trivial destructor.

  Descriptions are provided below only for members that differ from the
  primary template.

  The following operations perform pointer arithmetic. The key, operator,
  and computation correspondence is:

- **Table: Atomic pointer computations** <a id="tab:atomic.pointer.computations">[tab:atomic.pointer.computations]</a>

  | Key   | Op  | Computation | Key   | Op  | Computation |
  | ----- | --- | ----------- | ----- | --- | ----------- |
  | `add` | `+` | addition    | `sub` | `-` | subtraction |

  ``` cpp
- T* fetch_key(ptrdiff_t operand, memory_order order = memory_order_seq_cst) volatile noexcept;
- T* fetch_key(ptrdiff_t operand, memory_order order = memory_order_seq_cst) noexcept;
  ```

- *Requires:* `T` shall be an object type; otherwise the program is
- ill-formed.

  [*Note 1*: Pointer arithmetic on `void*` or function pointers is
  ill-formed. — *end note*]

  *Effects:* Atomically replaces the value pointed to by `this` with the
  result of the computation applied to the value pointed to by `this` and
  the given `operand`. Memory is affected according to the value of
  `order`. These operations are atomic read-modify-write
- operations ([[intro.multithread]]).

  *Returns:* Atomically, the value pointed to by `this` immediately before
  the effects.

  *Remarks:* The result may be an undefined address, but the operations
@@ -514,38 +791,521 @@ otherwise have no undefined behavior.
  ``` cpp
  T* operator op=(ptrdiff_t operand) volatile noexcept;
  T* operator op=(ptrdiff_t operand) noexcept;
  ```

  *Effects:* Equivalent to:
  `return fetch_`*`key`*`(operand) `*`op`*` operand;`
  ### Member operators common to integers and pointers to objects <a id="atomics.types.memop">[[atomics.types.memop]]</a>

  ``` cpp
- T operator++(int) volatile noexcept;
- T operator++(int) noexcept;
  ```

  *Effects:* Equivalent to: `return fetch_add(1);`

  ``` cpp
- T operator--(int) volatile noexcept;
- T operator--(int) noexcept;
  ```

  *Effects:* Equivalent to: `return fetch_sub(1);`

  ``` cpp
- T operator++() volatile noexcept;
- T operator++() noexcept;
  ```

  *Effects:* Equivalent to: `return fetch_add(1) + 1;`

  ``` cpp
- T operator--() volatile noexcept;
- T operator--() noexcept;
  ```

  *Effects:* Equivalent to: `return fetch_sub(1) - 1;`
  ``` cpp
  namespace std {
  template<class T> struct atomic {
  using value_type = T;
+
  static constexpr bool is_always_lock_free = implementation-defined; // whether a given atomic type's operations are always lock free
  bool is_lock_free() const volatile noexcept;
  bool is_lock_free() const noexcept;
+
+ // [atomics.types.operations], operations on atomic types
+ constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
+ constexpr atomic(T) noexcept;
+ atomic(const atomic&) = delete;
+ atomic& operator=(const atomic&) = delete;
+ atomic& operator=(const atomic&) volatile = delete;
+
+ T load(memory_order = memory_order::seq_cst) const volatile noexcept;
+ T load(memory_order = memory_order::seq_cst) const noexcept;
  operator T() const volatile noexcept;
  operator T() const noexcept;
+ void store(T, memory_order = memory_order::seq_cst) volatile noexcept;
+ void store(T, memory_order = memory_order::seq_cst) noexcept;
+ T operator=(T) volatile noexcept;
+ T operator=(T) noexcept;
+
+ T exchange(T, memory_order = memory_order::seq_cst) volatile noexcept;
+ T exchange(T, memory_order = memory_order::seq_cst) noexcept;
  bool compare_exchange_weak(T&, T, memory_order, memory_order) volatile noexcept;
  bool compare_exchange_weak(T&, T, memory_order, memory_order) noexcept;
  bool compare_exchange_strong(T&, T, memory_order, memory_order) volatile noexcept;
  bool compare_exchange_strong(T&, T, memory_order, memory_order) noexcept;
+ bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
+ bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) noexcept;
+ bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
+ bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) noexcept;

+ void wait(T, memory_order = memory_order::seq_cst) const volatile noexcept;
+ void wait(T, memory_order = memory_order::seq_cst) const noexcept;
+ void notify_one() volatile noexcept;
+ void notify_one() noexcept;
+ void notify_all() volatile noexcept;
+ void notify_all() noexcept;
  };
  }
  ```
+ The template argument for `T` shall meet the *Cpp17CopyConstructible*
+ and *Cpp17CopyAssignable* requirements. The program is ill-formed if any
+ of
+
+ - `is_trivially_copyable_v<T>`,
+ - `is_copy_constructible_v<T>`,
+ - `is_move_constructible_v<T>`,
+ - `is_copy_assignable_v<T>`, or
+ - `is_move_assignable_v<T>`
+
+ is `false`.

  [*Note 1*: Type arguments that are not also statically initializable
  may be difficult to use. — *end note*]

  The specialization `atomic<bool>` is a standard-layout struct.

  [*Note 2*: The representation of an atomic specialization need not have
+ the same size and alignment requirement as its corresponding argument
+ type. — *end note*]

  ### Operations on atomic types <a id="atomics.types.operations">[[atomics.types.operations]]</a>

  ``` cpp
+ constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
  ```

+ *Mandates:* `is_default_constructible_v<T>` is `true`.

+ *Effects:* Initializes the atomic object with the value of `T()`.
+ Initialization is not an atomic operation [[intro.multithread]].
  ``` cpp
  constexpr atomic(T desired) noexcept;
  ```

  *Effects:* Initializes the object with the value `desired`.
+ Initialization is not an atomic operation [[intro.multithread]].

+ [*Note 1*: It is possible to have an access to an atomic object `A`
  race with its construction, for example by communicating the address of
  the just-constructed object `A` to another thread via
+ `memory_order::relaxed` operations on a suitable atomic pointer
+ variable, and then immediately accessing `A` in the receiving thread.
+ This results in undefined behavior. — *end note*]
  ``` cpp
  static constexpr bool is_always_lock_free = implementation-defined; // whether a given atomic type's operations are always lock free
  ```

  The `static` data member `is_always_lock_free` is `true` if the atomic
  type’s operations are always lock-free, and `false` otherwise.

+ [*Note 2*: The value of `is_always_lock_free` is consistent with the
  value of the corresponding `ATOMIC_..._LOCK_FREE` macro, if
  defined. — *end note*]

  ``` cpp
  bool is_lock_free() const volatile noexcept;
  bool is_lock_free() const noexcept;
  ```

  *Returns:* `true` if the object’s operations are lock-free, `false`
  otherwise.

+ [*Note 3*: The return value of the `is_lock_free` member function is
  consistent with the value of `is_always_lock_free` for the same
  type. — *end note*]

  ``` cpp
+ void store(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
+ void store(T desired, memory_order order = memory_order::seq_cst) noexcept;
  ```

+ *Preconditions:* The `order` argument is neither
+ `memory_order::consume`, `memory_order::acquire`, nor
+ `memory_order::acq_rel`.
+
+ *Constraints:* For the `volatile` overload of this function,
+ `is_always_lock_free` is `true`.

  *Effects:* Atomically replaces the value pointed to by `this` with the
  value of `desired`. Memory is affected according to the value of
  `order`.

  ``` cpp
  T operator=(T desired) volatile noexcept;
  T operator=(T desired) noexcept;
  ```

+ *Constraints:* For the `volatile` overload of this function,
+ `is_always_lock_free` is `true`.
+
+ *Effects:* Equivalent to `store(desired)`.

  *Returns:* `desired`.

  ``` cpp
+ T load(memory_order order = memory_order::seq_cst) const volatile noexcept;
+ T load(memory_order order = memory_order::seq_cst) const noexcept;
  ```

+ *Preconditions:* The `order` argument is neither `memory_order::release`
+ nor `memory_order::acq_rel`.
+
+ *Constraints:* For the `volatile` overload of this function,
+ `is_always_lock_free` is `true`.

  *Effects:* Memory is affected according to the value of `order`.

  *Returns:* Atomically returns the value pointed to by `this`.

  ``` cpp
  operator T() const volatile noexcept;
  operator T() const noexcept;
  ```

+ *Constraints:* For the `volatile` overload of this function,
+ `is_always_lock_free` is `true`.
+
  *Effects:* Equivalent to: `return load();`

  ``` cpp
+ T exchange(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
+ T exchange(T desired, memory_order order = memory_order::seq_cst) noexcept;
  ```

+ *Constraints:* For the `volatile` overload of this function,
+ `is_always_lock_free` is `true`.
+
  *Effects:* Atomically replaces the value pointed to by `this` with
  `desired`. Memory is affected according to the value of `order`. These
  operations are atomic read-modify-write
+ operations [[intro.multithread]].

  *Returns:* Atomically returns the value pointed to by `this` immediately
  before the effects.

  ``` cpp
 
192
  bool compare_exchange_strong(T& expected, T desired,
193
  memory_order success, memory_order failure) volatile noexcept;
194
  bool compare_exchange_strong(T& expected, T desired,
195
  memory_order success, memory_order failure) noexcept;
196
  bool compare_exchange_weak(T& expected, T desired,
197
+ memory_order order = memory_order::seq_cst) volatile noexcept;
198
  bool compare_exchange_weak(T& expected, T desired,
199
+ memory_order order = memory_order::seq_cst) noexcept;
200
  bool compare_exchange_strong(T& expected, T desired,
201
+ memory_order order = memory_order::seq_cst) volatile noexcept;
202
  bool compare_exchange_strong(T& expected, T desired,
203
+ memory_order order = memory_order::seq_cst) noexcept;
204
  ```
205
 
206
*Constraints:* For the `volatile` overload of this function,
`is_always_lock_free` is `true`.

*Preconditions:* The `failure` argument is neither
`memory_order::release` nor `memory_order::acq_rel`.

*Effects:* Retrieves the value in `expected`. It then atomically
compares the value representation of the value pointed to by `this` for
equality with that previously retrieved from `expected`, and if true,
replaces the value pointed to by `this` with that in `desired`. If and
only if the comparison is `true`, memory is affected according to the
value of `success`, and if the comparison is false, memory is affected
according to the value of `failure`. When only one `memory_order`
argument is supplied, the value of `success` is `order`, and the value
of `failure` is `order` except that a value of `memory_order::acq_rel`
shall be replaced by the value `memory_order::acquire` and a value of
`memory_order::release` shall be replaced by the value
`memory_order::relaxed`. If and only if the comparison is false then,
after the atomic operation, the value in `expected` is replaced by the
value pointed to by `this` during the atomic comparison. If the
operation returns `true`, these operations are atomic read-modify-write
operations [[intro.multithread]] on the memory pointed to by `this`.
Otherwise, these operations are atomic load operations on that memory.

*Returns:* The result of the comparison.

[*Note 4*:

For example, the effect of `compare_exchange_strong` on objects without
padding bits [[basic.types]] is

``` cpp
if (memcmp(this, &expected, sizeof(*this)) == 0)
  memcpy(this, &desired, sizeof(*this));
else
  memcpy(&expected, this, sizeof(*this));
```

— *end note*]

[*Example 1*:

The expected use of the compare-and-exchange operations is as follows.
The compare-and-exchange operations will update `expected` when another
iteration of the loop is needed.

``` cpp
expected = current.load();
do {
  desired = function(expected);
} while (!current.compare_exchange_weak(expected, desired));
```

— *end example*]

[*Example 2*:

Because the expected value is updated only on failure, code releasing
the memory containing the `expected` value on success will work. For
example, list head insertion will act atomically and would not introduce
a data race in the following code:

``` cpp
do {
  p->next = head;  // make new list node point to the current head
} while (!head.compare_exchange_weak(p->next, p));  // try to insert
```

— *end example*]

*Remarks:* A weak compare-and-exchange operation may fail spuriously.
That is, even when the contents of memory referred to by `expected` and
`this` are equal, it may return `false` and store back to `expected` the
same memory contents that were originally there.

[*Note 5*: This spurious failure enables implementation of
compare-and-exchange on a broader class of machines, e.g., load-locked
store-conditional machines. A consequence of spurious failure is that
nearly all uses of weak compare-and-exchange will be in a loop. When a
compare-and-exchange is in a loop, the weak version will yield better
performance on some platforms. When a weak compare-and-exchange would
require a loop and a strong one would not, the strong one is
preferable. — *end note*]

[*Note 6*: Under cases where the `memcpy` and `memcmp` semantics of the
compare-and-exchange operations apply, the outcome might be failed
comparisons for values that compare equal with `operator==` if the value
representation has trap bits or alternate representations of the same
value. Notably, on implementations conforming to ISO/IEC/IEEE 60559,
floating-point `-0.0` and `+0.0` will not compare equal with `memcmp`
but will compare equal with `operator==`, and NaNs with the same payload
will compare equal with `memcmp` but will not compare equal with
`operator==`. — *end note*]

[*Note 7*:

Because compare-and-exchange acts on an object’s value representation,
padding bits that never participate in the object’s value representation
are ignored. As a consequence, the following code is guaranteed to avoid
spurious failure:

``` cpp
struct padded {
  char clank = 0x42;
  // Padding here.
  unsigned biff = 0xC0DEFEFE;
};
atomic<padded> pad = {};

bool zap() {
  padded expected, desired{0, 0};
  return pad.compare_exchange_strong(expected, desired);
}
```

— *end note*]

[*Note 8*:

For a union with bits that participate in the value representation of
some members but not others, compare-and-exchange might always fail.
This is because such padding bits have an indeterminate value when they
do not participate in the value representation of the active member. As
a consequence, the following code is not guaranteed to ever succeed:

``` cpp
union pony {
  double celestia = 0.;
  short luna;  // padded
};
atomic<pony> princesses = {};

bool party(pony desired) {
  pony expected;
  return princesses.compare_exchange_strong(expected, desired);
}
```

— *end note*]

``` cpp
void wait(T old, memory_order order = memory_order::seq_cst) const volatile noexcept;
void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
```

*Preconditions:* `order` is neither `memory_order::release` nor
`memory_order::acq_rel`.

*Effects:* Repeatedly performs the following steps, in order:

- Evaluates `load(order)` and compares its value representation for
  equality against that of `old`.
- If they compare unequal, returns.
- Blocks until it is unblocked by an atomic notifying operation or is
  unblocked spuriously.

*Remarks:* This function is an atomic waiting
operation [[atomics.wait]].

``` cpp
void notify_one() volatile noexcept;
void notify_one() noexcept;
```

*Effects:* Unblocks the execution of at least one atomic waiting
operation that is eligible to be unblocked [[atomics.wait]] by this
call, if any such atomic waiting operations exist.

*Remarks:* This function is an atomic notifying
operation [[atomics.wait]].

``` cpp
void notify_all() volatile noexcept;
void notify_all() noexcept;
```

*Effects:* Unblocks the execution of all atomic waiting operations that
are eligible to be unblocked [[atomics.wait]] by this call.

*Remarks:* This function is an atomic notifying
operation [[atomics.wait]].

393
  ### Specializations for integers <a id="atomics.types.int">[[atomics.types.int]]</a>
394
 
395
+ There are specializations of the `atomic` class template for the
396
+ integral types `char`, `signed char`, `unsigned char`, `short`,
397
+ `unsigned short`, `int`, `unsigned int`, `long`, `unsigned long`,
398
+ `long long`, `unsigned long long`, `char8_t`, `char16_t`, `char32_t`,
399
+ `wchar_t`, and any other types needed by the typedefs in the header
400
+ `<cstdint>`. For each such type `integral`, the specialization
401
+ `atomic<integral>` provides additional atomic operations appropriate to
402
+ integral types.
403
 
404
+ [*Note 1*: The specialization `atomic<bool>` uses the primary template
405
  [[atomics.types.generic]]. — *end note*]
406
 
407
  ``` cpp
408
  namespace std {
409
  template<> struct atomic<integral> {
410
  using value_type = integral;
411
  using difference_type = value_type;
412
+
413
  static constexpr bool is_always_lock_free = implementation-defined // whether a given atomic type's operations are always lock free;
414
  bool is_lock_free() const volatile noexcept;
415
  bool is_lock_free() const noexcept;
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
416
 
417
+ constexpr atomic() noexcept;
418
  constexpr atomic(integral) noexcept;
419
  atomic(const atomic&) = delete;
420
  atomic& operator=(const atomic&) = delete;
421
  atomic& operator=(const atomic&) volatile = delete;
422
+
423
+ void store(integral, memory_order = memory_order::seq_cst) volatile noexcept;
424
+ void store(integral, memory_order = memory_order::seq_cst) noexcept;
425
  integral operator=(integral) volatile noexcept;
426
  integral operator=(integral) noexcept;
427
+ integral load(memory_order = memory_order::seq_cst) const volatile noexcept;
428
+ integral load(memory_order = memory_order::seq_cst) const noexcept;
429
+ operator integral() const volatile noexcept;
430
+ operator integral() const noexcept;
431
+
432
+ integral exchange(integral, memory_order = memory_order::seq_cst) volatile noexcept;
433
+ integral exchange(integral, memory_order = memory_order::seq_cst) noexcept;
434
+ bool compare_exchange_weak(integral&, integral,
435
+ memory_order, memory_order) volatile noexcept;
436
+ bool compare_exchange_weak(integral&, integral,
437
+ memory_order, memory_order) noexcept;
438
+ bool compare_exchange_strong(integral&, integral,
439
+ memory_order, memory_order) volatile noexcept;
440
+ bool compare_exchange_strong(integral&, integral,
441
+ memory_order, memory_order) noexcept;
442
+ bool compare_exchange_weak(integral&, integral,
443
+ memory_order = memory_order::seq_cst) volatile noexcept;
444
+ bool compare_exchange_weak(integral&, integral,
445
+ memory_order = memory_order::seq_cst) noexcept;
446
+ bool compare_exchange_strong(integral&, integral,
447
+ memory_order = memory_order::seq_cst) volatile noexcept;
448
+ bool compare_exchange_strong(integral&, integral,
449
+ memory_order = memory_order::seq_cst) noexcept;
450
+
451
+ integral fetch_add(integral, memory_order = memory_order::seq_cst) volatile noexcept;
452
+ integral fetch_add(integral, memory_order = memory_order::seq_cst) noexcept;
453
+ integral fetch_sub(integral, memory_order = memory_order::seq_cst) volatile noexcept;
454
+ integral fetch_sub(integral, memory_order = memory_order::seq_cst) noexcept;
455
+ integral fetch_and(integral, memory_order = memory_order::seq_cst) volatile noexcept;
456
+ integral fetch_and(integral, memory_order = memory_order::seq_cst) noexcept;
457
+ integral fetch_or(integral, memory_order = memory_order::seq_cst) volatile noexcept;
458
+ integral fetch_or(integral, memory_order = memory_order::seq_cst) noexcept;
459
+ integral fetch_xor(integral, memory_order = memory_order::seq_cst) volatile noexcept;
460
+ integral fetch_xor(integral, memory_order = memory_order::seq_cst) noexcept;
461
 
462
  integral operator++(int) volatile noexcept;
463
  integral operator++(int) noexcept;
464
  integral operator--(int) volatile noexcept;
465
  integral operator--(int) noexcept;
 
475
  integral operator&=(integral) noexcept;
476
  integral operator|=(integral) volatile noexcept;
477
  integral operator|=(integral) noexcept;
478
  integral operator^=(integral) volatile noexcept;
479
  integral operator^=(integral) noexcept;
480
+
481
+ void wait(integral, memory_order = memory_order::seq_cst) const volatile noexcept;
482
+ void wait(integral, memory_order = memory_order::seq_cst) const noexcept;
483
+ void notify_one() volatile noexcept;
484
+ void notify_one() noexcept;
485
+ void notify_all() volatile noexcept;
486
+ void notify_all() noexcept;
487
  };
488
  }
489
  ```
490
 
491
  The atomic integral specializations are standard-layout structs. They
492
+ each have a trivial destructor.
493
 
494
  Descriptions are provided below only for members that differ from the
495
  primary template.
496
 
497
  The following operations perform arithmetic computations. The key,
498
  operator, and computation correspondence is:
499
 
500
+ **Table: Atomic arithmetic computations** <a id="atomic.types.int.comp">[atomic.types.int.comp]</a>
501
 
502
  | | | | | | |
503
  | ----- | --- | -------------------- | ----- | --- | -------------------- |
504
  | `add` | `+` | addition | `sub` | `-` | subtraction |
505
  | `or` | `|` | bitwise inclusive or | `xor` | `^` | bitwise exclusive or |
506
  | `and` | `&` | bitwise and | | | |
507
 
508
``` cpp
T fetch_key(T operand, memory_order order = memory_order::seq_cst) volatile noexcept;
T fetch_key(T operand, memory_order order = memory_order::seq_cst) noexcept;
```

*Constraints:* For the `volatile` overload of this function,
`is_always_lock_free` is `true`.

*Effects:* Atomically replaces the value pointed to by `this` with the
result of the computation applied to the value pointed to by `this` and
the given `operand`. Memory is affected according to the value of
`order`. These operations are atomic read-modify-write
operations [[intro.multithread]].

*Returns:* Atomically, the value pointed to by `this` immediately before
the effects.

*Remarks:* For signed integer types, the result is as if the object
value and parameters were converted to their corresponding unsigned
types, the computation performed on those types, and the result
converted back to the signed type.

[*Note 1*: There are no undefined results arising from the
computation. — *end note*]

``` cpp
T operator op=(T operand) volatile noexcept;
T operator op=(T operand) noexcept;
```

*Constraints:* For the `volatile` overload of this function,
`is_always_lock_free` is `true`.

*Effects:* Equivalent to:
`return fetch_`*`key`*`(operand) `*`op`*` operand;`

### Specializations for floating-point types <a id="atomics.types.float">[[atomics.types.float]]</a>

There are specializations of the `atomic` class template for the
floating-point types `float`, `double`, and `long double`. For each such
type `floating-point`, the specialization `atomic<floating-point>`
provides additional atomic operations appropriate to floating-point
types.

``` cpp
namespace std {
  template<> struct atomic<floating-point> {
    using value_type = floating-point;
    using difference_type = value_type;

    static constexpr bool is_always_lock_free = implementation-defined;  // whether a given atomic type's operations are always lock free
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(floating-point) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(floating-point, memory_order = memory_order::seq_cst) volatile noexcept;
    void store(floating-point, memory_order = memory_order::seq_cst) noexcept;
    floating-point operator=(floating-point) volatile noexcept;
    floating-point operator=(floating-point) noexcept;
    floating-point load(memory_order = memory_order::seq_cst) const volatile noexcept;
    floating-point load(memory_order = memory_order::seq_cst) const noexcept;
    operator floating-point() const volatile noexcept;
    operator floating-point() const noexcept;

    floating-point exchange(floating-point,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    floating-point exchange(floating-point,
                            memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order, memory_order) volatile noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order, memory_order) noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order, memory_order) volatile noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order, memory_order) noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order = memory_order::seq_cst) volatile noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order = memory_order::seq_cst) noexcept;

    floating-point fetch_add(floating-point,
                             memory_order = memory_order::seq_cst) volatile noexcept;
    floating-point fetch_add(floating-point,
                             memory_order = memory_order::seq_cst) noexcept;
    floating-point fetch_sub(floating-point,
                             memory_order = memory_order::seq_cst) volatile noexcept;
    floating-point fetch_sub(floating-point,
                             memory_order = memory_order::seq_cst) noexcept;

    floating-point operator+=(floating-point) volatile noexcept;
    floating-point operator+=(floating-point) noexcept;
    floating-point operator-=(floating-point) volatile noexcept;
    floating-point operator-=(floating-point) noexcept;

    void wait(floating-point, memory_order = memory_order::seq_cst) const volatile noexcept;
    void wait(floating-point, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    void notify_one() noexcept;
    void notify_all() volatile noexcept;
    void notify_all() noexcept;
  };
}
```

The atomic floating-point specializations are standard-layout structs.
They each have a trivial destructor.

Descriptions are provided below only for members that differ from the
primary template.

The following operations perform arithmetic addition and subtraction
computations. The key, operator, and computation correspondence is
identified in [[atomic.types.int.comp]].

``` cpp
T fetch_key(T operand, memory_order order = memory_order::seq_cst) volatile noexcept;
T fetch_key(T operand, memory_order order = memory_order::seq_cst) noexcept;
```

*Constraints:* For the `volatile` overload of this function,
`is_always_lock_free` is `true`.

*Effects:* Atomically replaces the value pointed to by `this` with the
result of the computation applied to the value pointed to by `this` and
the given `operand`. Memory is affected according to the value of
`order`. These operations are atomic read-modify-write
operations [[intro.multithread]].

*Returns:* Atomically, the value pointed to by `this` immediately before
the effects.

*Remarks:* If the result is not a representable value for its
type [[expr.pre]] the result is unspecified, but the operations
otherwise have no undefined behavior. Atomic arithmetic operations on
*`floating-point`* should conform to the
`std::numeric_limits<`*`floating-point`*`>` traits associated with the
floating-point type [[limits.syn]]. The floating-point
environment [[cfenv]] for atomic arithmetic operations on
*`floating-point`* may be different than the calling thread’s
floating-point environment.

``` cpp
T operator op=(T operand) volatile noexcept;
T operator op=(T operand) noexcept;
```

*Constraints:* For the `volatile` overload of this function,
`is_always_lock_free` is `true`.

*Effects:* Equivalent to:
`return fetch_`*`key`*`(operand) `*`op`*` operand;`

*Remarks:* If the result is not a representable value for its
type [[expr.pre]] the result is unspecified, but the operations
otherwise have no undefined behavior. Atomic arithmetic operations on
*`floating-point`* should conform to the
`std::numeric_limits<`*`floating-point`*`>` traits associated with the
floating-point type [[limits.syn]]. The floating-point
environment [[cfenv]] for atomic arithmetic operations on
*`floating-point`* may be different than the calling thread’s
floating-point environment.

### Partial specialization for pointers <a id="atomics.types.pointer">[[atomics.types.pointer]]</a>

``` cpp
namespace std {
  template<class T> struct atomic<T*> {
    using value_type = T*;
    using difference_type = ptrdiff_t;

    static constexpr bool is_always_lock_free = implementation-defined;  // whether a given atomic type's operations are always lock free
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(T*) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    void store(T*, memory_order = memory_order::seq_cst) noexcept;
    T* operator=(T*) volatile noexcept;
    T* operator=(T*) noexcept;
    T* load(memory_order = memory_order::seq_cst) const volatile noexcept;
    T* load(memory_order = memory_order::seq_cst) const noexcept;
    operator T*() const volatile noexcept;
    operator T*() const noexcept;

    T* exchange(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    T* exchange(T*, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order, memory_order) volatile noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order, memory_order) volatile noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T*&, T*,
                               memory_order = memory_order::seq_cst) volatile noexcept;
    bool compare_exchange_weak(T*&, T*,
                               memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(T*&, T*,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    bool compare_exchange_strong(T*&, T*,
                                 memory_order = memory_order::seq_cst) noexcept;

    T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;

    T* operator++(int) volatile noexcept;
    T* operator++(int) noexcept;
    T* operator--(int) volatile noexcept;
    T* operator--(int) noexcept;
    T* operator++() volatile noexcept;
    T* operator++() noexcept;
    T* operator--() volatile noexcept;
    T* operator--() noexcept;
    T* operator+=(ptrdiff_t) volatile noexcept;
    T* operator+=(ptrdiff_t) noexcept;
    T* operator-=(ptrdiff_t) volatile noexcept;
    T* operator-=(ptrdiff_t) noexcept;

    void wait(T*, memory_order = memory_order::seq_cst) const volatile noexcept;
    void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    void notify_one() noexcept;
    void notify_all() volatile noexcept;
    void notify_all() noexcept;
  };
}
```

There is a partial specialization of the `atomic` class template for
pointers. Specializations of this partial specialization are
standard-layout structs. They each have a trivial destructor.

Descriptions are provided below only for members that differ from the
primary template.

The following operations perform pointer arithmetic. The key, operator,
and computation correspondence is:

**Table: Atomic pointer computations** <a id="atomic.types.pointer.comp">[atomic.types.pointer.comp]</a>

| key   | op  | computation | key   | op  | computation |
| ----- | --- | ----------- | ----- | --- | ----------- |
| `add` | `+` | addition    | `sub` | `-` | subtraction |

``` cpp
T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) volatile noexcept;
T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) noexcept;
```

*Constraints:* For the `volatile` overload of this function,
`is_always_lock_free` is `true`.

*Mandates:* `T` is a complete object type.

[*Note 1*: Pointer arithmetic on `void*` or function pointers is
ill-formed. — *end note*]

*Effects:* Atomically replaces the value pointed to by `this` with the
result of the computation applied to the value pointed to by `this` and
the given `operand`. Memory is affected according to the value of
`order`. These operations are atomic read-modify-write
operations [[intro.multithread]].

*Returns:* Atomically, the value pointed to by `this` immediately before
the effects.

*Remarks:* The result may be an undefined address, but the operations
otherwise have no undefined behavior.

``` cpp
T* operator op=(ptrdiff_t operand) volatile noexcept;
T* operator op=(ptrdiff_t operand) noexcept;
```

*Constraints:* For the `volatile` overload of this function,
`is_always_lock_free` is `true`.

*Effects:* Equivalent to:
`return fetch_`*`key`*`(operand) `*`op`*` operand;`

### Member operators common to integers and pointers to objects <a id="atomics.types.memop">[[atomics.types.memop]]</a>

``` cpp
value_type operator++(int) volatile noexcept;
value_type operator++(int) noexcept;
```

*Constraints:* For the `volatile` overload of this function,
`is_always_lock_free` is `true`.

*Effects:* Equivalent to: `return fetch_add(1);`

``` cpp
value_type operator--(int) volatile noexcept;
value_type operator--(int) noexcept;
```

*Constraints:* For the `volatile` overload of this function,
`is_always_lock_free` is `true`.

*Effects:* Equivalent to: `return fetch_sub(1);`

``` cpp
value_type operator++() volatile noexcept;
value_type operator++() noexcept;
```

*Constraints:* For the `volatile` overload of this function,
`is_always_lock_free` is `true`.

*Effects:* Equivalent to: `return fetch_add(1) + 1;`

``` cpp
value_type operator--() volatile noexcept;
value_type operator--() noexcept;
```

*Constraints:* For the `volatile` overload of this function,
`is_always_lock_free` is `true`.

*Effects:* Equivalent to: `return fetch_sub(1) - 1;`

### Partial specializations for smart pointers <a id="util.smartptr.atomic">[[util.smartptr.atomic]]</a>

The library provides partial specializations of the `atomic` template
for shared-ownership smart pointers [[smartptr]]. The behavior of all
operations is as specified in [[atomics.types.generic]], unless
specified otherwise. The template parameter `T` of these partial
specializations may be an incomplete type.

All changes to an atomic smart pointer in this subclause, and all
associated `use_count` increments, are guaranteed to be performed
atomically. Associated `use_count` decrements are sequenced after the
atomic operation, but are not required to be part of it. Any associated
deletion and deallocation are sequenced after the atomic update step and
are not part of the atomic operation.

[*Note 1*: If the atomic operation uses locks, locks acquired by the
implementation will be held when any `use_count` adjustments are
performed, and will not be held when any destruction or deallocation
resulting from this is performed. — *end note*]

[*Example 1*:

``` cpp
template<typename T> class atomic_list {
  struct node {
    T t;
    shared_ptr<node> next;
  };
  atomic<shared_ptr<node>> head;

public:
  auto find(T t) const {
    auto p = head.load();
    while (p && p->t != t)
      p = p->next;

    return shared_ptr<node>(move(p));
  }

  void push_front(T t) {
    auto p = make_shared<node>();
    p->t = t;
    p->next = head;
    while (!head.compare_exchange_weak(p->next, p)) {}
  }
};
```

— *end example*]

+ #### Partial specialization for `shared_ptr` <a id="util.smartptr.atomic.shared">[[util.smartptr.atomic.shared]]</a>
895
+
896
+ ``` cpp
897
+ namespace std {
898
+ template<class T> struct atomic<shared_ptr<T>> {
899
+ using value_type = shared_ptr<T>;
900
+
901
+ static constexpr bool is_always_lock_free = implementation-defined // whether a given atomic type's operations are always lock free;
902
+ bool is_lock_free() const noexcept;
903
+
904
+ constexpr atomic() noexcept;
905
+ atomic(shared_ptr<T> desired) noexcept;
906
+ atomic(const atomic&) = delete;
907
+ void operator=(const atomic&) = delete;
908
+
909
+ shared_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
910
+ operator shared_ptr<T>() const noexcept;
911
+ void store(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
912
+ void operator=(shared_ptr<T> desired) noexcept;
913
+
914
+ shared_ptr<T> exchange(shared_ptr<T> desired,
915
+ memory_order order = memory_order::seq_cst) noexcept;
916
+ bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
917
+ memory_order success, memory_order failure) noexcept;
918
+ bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
919
+ memory_order success, memory_order failure) noexcept;
920
+ bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
921
+ memory_order order = memory_order::seq_cst) noexcept;
922
+ bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
923
+ memory_order order = memory_order::seq_cst) noexcept;
924
+
925
+ void wait(shared_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
926
+ void notify_one() noexcept;
927
+ void notify_all() noexcept;
928
+
929
+ private:
930
+ shared_ptr<T> p; // exposition only
931
+ };
932
+ }
933
+ ```

``` cpp
constexpr atomic() noexcept;
```

*Effects:* Initializes `p{}`.

``` cpp
atomic(shared_ptr<T> desired) noexcept;
```

*Effects:* Initializes the object with the value `desired`.
Initialization is not an atomic operation [[intro.multithread]].

[*Note 1*: It is possible to have an access to an atomic object `A`
race with its construction, for example, by communicating the address of
the just-constructed object `A` to another thread via
`memory_order::relaxed` operations on a suitable atomic pointer
variable, and then immediately accessing `A` in the receiving thread.
This results in undefined behavior. — *end note*]

``` cpp
void store(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
```

*Preconditions:* `order` is neither `memory_order::consume`,
`memory_order::acquire`, nor `memory_order::acq_rel`.

*Effects:* Atomically replaces the value pointed to by `this` with the
value of `desired` as if by `p.swap(desired)`. Memory is affected
according to the value of `order`.

``` cpp
void operator=(shared_ptr<T> desired) noexcept;
```

*Effects:* Equivalent to `store(desired)`.

``` cpp
shared_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
```

*Preconditions:* `order` is neither `memory_order::release` nor
`memory_order::acq_rel`.

*Effects:* Memory is affected according to the value of `order`.

*Returns:* Atomically returns `p`.

``` cpp
operator shared_ptr<T>() const noexcept;
```

*Effects:* Equivalent to: `return load();`

``` cpp
shared_ptr<T> exchange(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
```

*Effects:* Atomically replaces `p` with `desired` as if by
`p.swap(desired)`. Memory is affected according to the value of `order`.
This is an atomic read-modify-write operation [[intro.races]].

*Returns:* Atomically returns the value of `p` immediately before the
effects.

``` cpp
bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                           memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
                             memory_order success, memory_order failure) noexcept;
```

*Preconditions:* `failure` is neither `memory_order::release` nor
`memory_order::acq_rel`.

*Effects:* If `p` is equivalent to `expected`, assigns `desired` to `p`
and has synchronization semantics corresponding to the value of
`success`, otherwise assigns `p` to `expected` and has synchronization
semantics corresponding to the value of `failure`.

*Returns:* `true` if `p` was equivalent to `expected`, `false`
otherwise.

*Remarks:* Two `shared_ptr` objects are equivalent if they store the
same pointer value and either share ownership or are both empty. The
weak form may fail spuriously. See [[atomics.types.operations]].

If the operation returns `true`, `expected` is not accessed after the
atomic update and the operation is an atomic read-modify-write
operation [[intro.multithread]] on the memory pointed to by `this`.
Otherwise, the operation is an atomic load operation on that memory, and
`expected` is updated with the existing value read from the atomic
object in the attempted atomic update. The `use_count` update
corresponding to the write to `expected` is part of the atomic
operation. The write to `expected` itself is not required to be part of
the atomic operation.
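
The equivalence relation defined in the Remarks above can be expressed directly in user code; here is a sketch for `shared_ptr<int>` (the helper name `equivalent` is illustrative, not part of the interface):

``` cpp
#include <memory>

// Two shared_ptrs are equivalent if they store the same pointer value and
// either share ownership (same control block) or are both empty.
// owner_before induces ownership-based ordering: mutual non-ordering
// means the same (or no) control block.
bool equivalent(const std::shared_ptr<int>& a, const std::shared_ptr<int>& b) {
  bool same_owner = !a.owner_before(b) && !b.owner_before(a);
  return a.get() == b.get() && same_owner;
}
```

Note that equality of the stored pointers alone is not enough: two independently allocated control blocks can, via the aliasing constructor, store the same pointer without sharing ownership.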

``` cpp
bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                           memory_order order = memory_order::seq_cst) noexcept;
```

*Effects:* Equivalent to:

``` cpp
return compare_exchange_weak(expected, desired, order, fail_order);
```

where `fail_order` is the same as `order` except that a value of
`memory_order::acq_rel` shall be replaced by the value
`memory_order::acquire` and a value of `memory_order::release` shall be
replaced by the value `memory_order::relaxed`.
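
The `fail_order` mapping described here is mechanical: a failure order cannot carry a release component, so the release half of the combined order is dropped. It can be sketched as a small helper (`fail_order_for` is an illustrative name):

``` cpp
#include <atomic>

// Derive the failure order from a combined order: acq_rel degrades to
// acquire and release degrades to relaxed; all other orders pass through.
constexpr std::memory_order fail_order_for(std::memory_order order) {
  if (order == std::memory_order_acq_rel) return std::memory_order_acquire;
  if (order == std::memory_order_release) return std::memory_order_relaxed;
  return order;
}
```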

``` cpp
bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
                             memory_order order = memory_order::seq_cst) noexcept;
```

*Effects:* Equivalent to:

``` cpp
return compare_exchange_strong(expected, desired, order, fail_order);
```

where `fail_order` is the same as `order` except that a value of
`memory_order::acq_rel` shall be replaced by the value
`memory_order::acquire` and a value of `memory_order::release` shall be
replaced by the value `memory_order::relaxed`.

``` cpp
void wait(shared_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
```

*Preconditions:* `order` is neither `memory_order::release` nor
`memory_order::acq_rel`.

*Effects:* Repeatedly performs the following steps, in order:

- Evaluates `load(order)` and compares it to `old`.
- If the two are not equivalent, returns.
- Blocks until it is unblocked by an atomic notifying operation or is
  unblocked spuriously.

*Remarks:* Two `shared_ptr` objects are equivalent if they store the
same pointer and either share ownership or are both empty. This function
is an atomic waiting operation [[atomics.wait]].
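
The load/compare/block loop above can be modeled with a lock-based sketch. This is not how a real implementation need work; `waitable_ptr`, its `equivalent` helper, and the mutex/condition_variable machinery are all assumptions made for illustration:

``` cpp
#include <condition_variable>
#include <memory>
#include <mutex>

// Lock-based model of wait/store/notify for a shared_ptr value.
template<class T> class waitable_ptr {
  std::shared_ptr<T> p;
  mutable std::mutex m;
  mutable std::condition_variable cv;

  // Equivalence as defined in the Remarks: same stored pointer, and
  // shared ownership or both empty.
  static bool equivalent(const std::shared_ptr<T>& a, const std::shared_ptr<T>& b) {
    return a.get() == b.get() && !a.owner_before(b) && !b.owner_before(a);
  }

public:
  void store(std::shared_ptr<T> desired) {
    { std::lock_guard<std::mutex> lk(m); p.swap(desired); }
    cv.notify_all();  // plays the role of the atomic notifying operation
  }

  std::shared_ptr<T> load() const {
    std::lock_guard<std::mutex> lk(m);
    return p;
  }

  // Blocks until the held value is no longer equivalent to old; the
  // predicate re-check absorbs spurious wakeups, mirroring the loop above.
  void wait(std::shared_ptr<T> old) const {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [&] { return !equivalent(p, old); });
  }
};
```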

``` cpp
void notify_one() noexcept;
```

*Effects:* Unblocks the execution of at least one atomic waiting
operation that is eligible to be unblocked [[atomics.wait]] by this
call, if any such atomic waiting operations exist.

*Remarks:* This function is an atomic notifying
operation [[atomics.wait]].

``` cpp
void notify_all() noexcept;
```

*Effects:* Unblocks the execution of all atomic waiting operations that
are eligible to be unblocked [[atomics.wait]] by this call.

*Remarks:* This function is an atomic notifying
operation [[atomics.wait]].

#### Partial specialization for `weak_ptr` <a id="util.smartptr.atomic.weak">[[util.smartptr.atomic.weak]]</a>

``` cpp
namespace std {
  template<class T> struct atomic<weak_ptr<T>> {
    using value_type = weak_ptr<T>;

    static constexpr bool is_always_lock_free = implementation-defined;  // whether a given atomic type's operations are always lock free
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    atomic(weak_ptr<T> desired) noexcept;
    atomic(const atomic&) = delete;
    void operator=(const atomic&) = delete;

    weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
    operator weak_ptr<T>() const noexcept;
    void store(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
    void operator=(weak_ptr<T> desired) noexcept;

    weak_ptr<T> exchange(weak_ptr<T> desired,
                         memory_order order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                               memory_order success, memory_order failure) noexcept;
    bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                                 memory_order success, memory_order failure) noexcept;
    bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                               memory_order order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                                 memory_order order = memory_order::seq_cst) noexcept;

    void wait(weak_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
    void notify_one() noexcept;
    void notify_all() noexcept;

  private:
    weak_ptr<T> p;                 // exposition only
  };
}
```

``` cpp
constexpr atomic() noexcept;
```

*Effects:* Initializes `p{}`.

``` cpp
atomic(weak_ptr<T> desired) noexcept;
```

*Effects:* Initializes the object with the value `desired`.
Initialization is not an atomic operation [[intro.multithread]].

[*Note 1*: It is possible to have an access to an atomic object `A`
race with its construction, for example, by communicating the address of
the just-constructed object `A` to another thread via
`memory_order::relaxed` operations on a suitable atomic pointer
variable, and then immediately accessing `A` in the receiving thread.
This results in undefined behavior. — *end note*]

``` cpp
void store(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
```

*Preconditions:* `order` is neither `memory_order::consume`,
`memory_order::acquire`, nor `memory_order::acq_rel`.

*Effects:* Atomically replaces the value pointed to by `this` with the
value of `desired` as if by `p.swap(desired)`. Memory is affected
according to the value of `order`.

``` cpp
void operator=(weak_ptr<T> desired) noexcept;
```

*Effects:* Equivalent to `store(desired)`.

``` cpp
weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
```

*Preconditions:* `order` is neither `memory_order::release` nor
`memory_order::acq_rel`.

*Effects:* Memory is affected according to the value of `order`.

*Returns:* Atomically returns `p`.

``` cpp
operator weak_ptr<T>() const noexcept;
```

*Effects:* Equivalent to: `return load();`

``` cpp
weak_ptr<T> exchange(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
```

*Effects:* Atomically replaces `p` with `desired` as if by
`p.swap(desired)`. Memory is affected according to the value of `order`.
This is an atomic read-modify-write operation [[intro.races]].

*Returns:* Atomically returns the value of `p` immediately before the
effects.

``` cpp
bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                           memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                             memory_order success, memory_order failure) noexcept;
```

*Preconditions:* `failure` is neither `memory_order::release` nor
`memory_order::acq_rel`.

*Effects:* If `p` is equivalent to `expected`, assigns `desired` to `p`
and has synchronization semantics corresponding to the value of
`success`, otherwise assigns `p` to `expected` and has synchronization
semantics corresponding to the value of `failure`.

*Returns:* `true` if `p` was equivalent to `expected`, `false`
otherwise.

*Remarks:* Two `weak_ptr` objects are equivalent if they store the same
pointer value and either share ownership or are both empty. The weak
form may fail spuriously. See [[atomics.types.operations]].

If the operation returns `true`, `expected` is not accessed after the
atomic update and the operation is an atomic read-modify-write
operation [[intro.multithread]] on the memory pointed to by `this`.
Otherwise, the operation is an atomic load operation on that memory, and
`expected` is updated with the existing value read from the atomic
object in the attempted atomic update. The `use_count` update
corresponding to the write to `expected` is part of the atomic
operation. The write to `expected` itself is not required to be part of
the atomic operation.
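
Unlike `shared_ptr`, `weak_ptr` does not expose its stored pointer, so a user-level check can only approximate this equivalence relation. A sketch for `weak_ptr<int>` (the helper name `equivalent` is illustrative, and the `lock()`-based pointer comparison is an approximation that only works while the referents are alive):

``` cpp
#include <memory>

// Approximate weak_ptr equivalence: mutual non-ordering under owner_before
// means the two refer to the same control block (or are both empty);
// lock() then compares the stored pointers while the objects are alive.
bool equivalent(const std::weak_ptr<int>& a, const std::weak_ptr<int>& b) {
  bool same_owner = !a.owner_before(b) && !b.owner_before(a);
  return same_owner && a.lock().get() == b.lock().get();
}
```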

``` cpp
bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                           memory_order order = memory_order::seq_cst) noexcept;
```

*Effects:* Equivalent to:

``` cpp
return compare_exchange_weak(expected, desired, order, fail_order);
```

where `fail_order` is the same as `order` except that a value of
`memory_order::acq_rel` shall be replaced by the value
`memory_order::acquire` and a value of `memory_order::release` shall be
replaced by the value `memory_order::relaxed`.

``` cpp
bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                             memory_order order = memory_order::seq_cst) noexcept;
```

*Effects:* Equivalent to:

``` cpp
return compare_exchange_strong(expected, desired, order, fail_order);
```

where `fail_order` is the same as `order` except that a value of
`memory_order::acq_rel` shall be replaced by the value
`memory_order::acquire` and a value of `memory_order::release` shall be
replaced by the value `memory_order::relaxed`.

``` cpp
void wait(weak_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
```

*Preconditions:* `order` is neither `memory_order::release` nor
`memory_order::acq_rel`.

*Effects:* Repeatedly performs the following steps, in order:

- Evaluates `load(order)` and compares it to `old`.
- If the two are not equivalent, returns.
- Blocks until it is unblocked by an atomic notifying operation or is
  unblocked spuriously.

*Remarks:* Two `weak_ptr` objects are equivalent if they store the same
pointer and either share ownership or are both empty. This function is
an atomic waiting operation [[atomics.wait]].

``` cpp
void notify_one() noexcept;
```

*Effects:* Unblocks the execution of at least one atomic waiting
operation that is eligible to be unblocked [[atomics.wait]] by this
call, if any such atomic waiting operations exist.

*Remarks:* This function is an atomic notifying
operation [[atomics.wait]].

``` cpp
void notify_all() noexcept;
```

*Effects:* Unblocks the execution of all atomic waiting operations that
are eligible to be unblocked [[atomics.wait]] by this call.

*Remarks:* This function is an atomic notifying
operation [[atomics.wait]].