
33.5 Atomic operations [atomics] C++N4910:2022 (711) p1675.cpp


Introduction

N4910 Working Draft, Standard for Programming Language C++

C++ N4910 is a Working Draft produced by ISO/IEC JTC1 SC22 WG21.
It is not the official ISO/IEC 14882 standard.
Several working groups under ISO/IEC JTC1 SC22, including WG21, publish their working documents whenever possible in order to solicit broad feedback.

Around 2000, as a liaison from ISO/IEC JTC1 SC7 to ISO/IEC JTC1 SC22, I was involved in activities aimed at improving the quality of C and C++. That was before the draft of ISO/IEC TS 17961 appeared; the spirit of C still dominated, and security measures were treated as secondary. Since then, the publication of ISO/IEC TS 17961, the overhaul of the C/C++ libraries, and the evolution of C++ have all moved forward rapidly.

You cannot tell which direction that evolution is heading without actually compiling and running the code. I have long argued that the electronic files of the C/C++ draft standards should be in a form that can be compiled as-is, and the same applies to MISRA C/C++ and CERT C/C++. MISRA C/C++ now provides its code fragments in compilable form as an Example Suite.

This series of articles examines how to put the code fragments into compilable form, then compiles, links, and runs them to check for differences between the draft text and the implementations (g++, Clang++), reviews the technical content, and feeds the results back to ISO/IEC JTC1 SC22 WG21.
I also intend to use the results as a reference when compiling the code fragments in coding standards such as CERT C++ and MISRA C++. It would be useful to confirm any time lag between CERT C++/MISRA C++ and the standardization work. The relationships with the Boost libraries, Linux OS, the Hakoniwa project, g++ (GCC), and clang++ (LLVM) are also under investigation.
If you notice any omissions or have useful information, please let me know.

Background

When a compile error appears in C/C++, I am often at a loss.
Perhaps one time in a few, a web search turns up the error.
Often, though, the conditions differ, and the fix described there does not achieve my goal. By broadly recording compile errors under various conditions, together with how they were handled, I hope these notes will be useful the next time someone hits the same error.

In the past six months, my own online notes have rescued me three times.
I have also recently managed to resolve more than ten previously unsolved records. That is mainly thanks to the following three sources of information:

cpprefjp - C++ Japanese Reference

Compiler support status

I am also indebted to the sites listed at
https://researchmap.jp/joub9b3my-1797580/#_1797580

Work procedure

Each file is compiled with Clang++ (-std=c++03 and -std=c++2b) and with g++ (-std=c++03 and -std=c++2b), and then:

1) Collect the compile errors.
2) Investigate how to remove the compile errors.
For code whose only purpose is to demonstrate a compile error, the kind of error is simply recorded; no attempt is made to remove it.
For code that illustrates the grammar, if removing the compile errors takes significant effort, the work is done incrementally.
3) Investigate how to remove link errors.
For code that illustrates the grammar, if removing the link errors takes significant effort, the work is done incrementally.
4) Produce meaningful output.
Even when compilation and linking succeed, producing meaningful output can raise further compile and link errors that are hard to resolve; this too is done incrementally.

The files are in various states, from 1) only up to 4) completed. Any advice that moves things even one step forward is welcome. The current status of each file is recorded under "Open issues".

C++N4910:2022 Standard Working Draft on ISO/IEC 14882(0) sample code compile list

```bash
$ docker run -v /Users/ogawakiyoshi/n4910:/Users/ogawakiyoshi/n4910 -it kaizenjapan/n4910 /bin/bash
```

C++N4741, 2018 Standard Working Draft on ISO/IEC 14882 sample code compile list

C++N4606, 2016 example code compile list

C++N4606, 2016 Working Draft 2016, ISO/IEC 14882, C++ standard(1) Example code compile list
https://qiita.com/kaizen_nagoya/items/df5d62c35bd6ed1c3d43/

C++N3242, 2011 sample code compile list on clang++ and g++

Book reports

These articles are published as "book reports" on the C language standard, written by C compilers.
I hope you will read the compile experiments as G++'s and Clang++'s book reports on C++ N4910.
A book report is not necessarily something only a human or an AI produces.
Taking a book (including an e-book) as input, let us accept any text produced about its content as a book report.
This raises questions such as how the original text should be digitized so that it can be compiled, and how to build fragments that a compiler can interpret.

Personal development

Quietly plugging away at compiler testing on one's own is one form of personal development.

<This section is a work in progress and will be updated incrementally.>

Compiler

clang++ --version

Before 2022-08-26

Debian clang version 14.0.5-++20220610033153+c12386ae247c-1~exp1~20220610153237.151
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin

From 2022-08-27

Debian clang version 14.0.6-++20220622053050+f28c006a5895-1~exp1~20220622173135.152
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin

g++ --version

g++ (GCC) 12.1.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

33.5 Atomic operations [atomics] C++N4910:2022 (711) p1675.cpp

Source code

p1675.cpp
// C++N4910 Committee Draft, Standard for Programming Language C++
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/n4910.pdf
const char * n4910 = "33.5 Atomic operations [atomics] C++N4910:2022 (711) p1675.cpp";
// Debian clang version 14.0.5-++20220610033153+c12386ae247c-
// g++ (GCC) 12.1.0 Copyright (C) 2022 Free Software Foundation, Inc.
// Edited by Dr. OGAWA Kiyoshi. Compile procedure and results record.
// C++N4910:2022 Standard Working Draft on ISO/IEC 14882(0) sample code compile list
// https://qiita.com/kaizen_nagoya/items/fc957ddddd402004bb91

#include "N4910.h"

using namespace std;

// 33.5.1 General [atomics.general]
1 Subclause 33.5 describes components for fine-grained atomic access. This access is provided via operations on atomic objects.
33.5.2 Header <atomic> synopsis [atomics.syn]
namespace std {
  // 33.5.4, order and consistency
  enum class memory_order : unspecified;
  template<class T>
    T kill_dependency(T y) noexcept;
}
// 33.5.5, lock-free property
#define ATOMIC_BOOL_LOCK_FREE     unspecified
#define ATOMIC_CHAR_LOCK_FREE     unspecified
#define ATOMIC_CHAR8_T_LOCK_FREE  unspecified
#define ATOMIC_CHAR16_T_LOCK_FREE unspecified
#define ATOMIC_CHAR32_T_LOCK_FREE unspecified
#define ATOMIC_WCHAR_T_LOCK_FREE  unspecified
#define ATOMIC_SHORT_LOCK_FREE    unspecified
#define ATOMIC_INT_LOCK_FREE      unspecified
#define ATOMIC_LONG_LOCK_FREE     unspecified
#define ATOMIC_LLONG_LOCK_FREE    unspecified
#define ATOMIC_POINTER_LOCK_FREE  unspecified
namespace std {
// 33.5.7, class template atomic_ref
template<class T> struct atomic_ref;
// 33.5.7.5, partial specialization for pointers
template<class T> struct atomic_ref<T*>;
// 33.5.8, class template atomic
template<class T> struct atomic;
// 33.5.8.5, partial specialization for pointers
template<class T> struct atomic<T*>;
// 33.5.9, non-member functions
template<class T>
    bool atomic_is_lock_free(const volatile atomic<T>*) noexcept;
  template<class T>
    bool atomic_is_lock_free(const atomic<T>*) noexcept;
  template<class T>
    void atomic_store(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;
  template<class T>
    void atomic_store(atomic<T>*, typename atomic<T>::value_type) noexcept;
  template<class T>
    void atomic_store_explicit(volatile atomic<T>*, typename atomic<T>::value_type,
                               memory_order) noexcept;
  template<class T>
    void atomic_store_explicit(atomic<T>*, typename atomic<T>::value_type,
                               memory_order) noexcept;
  template<class T>
    T atomic_load(const volatile atomic<T>*) noexcept;
  template<class T>
    T atomic_load(const atomic<T>*) noexcept;
  template<class T>
    T atomic_load_explicit(const volatile atomic<T>*, memory_order) noexcept;
  template<class T>
    T atomic_load_explicit(const atomic<T>*, memory_order) noexcept;
  template<class T>
    T atomic_exchange(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;
  template<class T>
    T atomic_exchange(atomic<T>*, typename atomic<T>::value_type) noexcept;
  template<class T>
    T atomic_exchange_explicit(volatile atomic<T>*, typename atomic<T>::value_type,
                               memory_order) noexcept;
  template<class T>
    T atomic_exchange_explicit(atomic<T>*, typename atomic<T>::value_type,
                               memory_order) noexcept;
  template<class T>
    bool atomic_compare_exchange_weak(volatile atomic<T>*,
                                      typename atomic<T>::value_type*,
                                      typename atomic<T>::value_type) noexcept;
template<class T>
  bool atomic_compare_exchange_weak(atomic<T>*,
                                    typename atomic<T>::value_type*,
                                    typename atomic<T>::value_type) noexcept;
template<class T>
  bool atomic_compare_exchange_strong(volatile atomic<T>*,
                                      typename atomic<T>::value_type*,
                                      typename atomic<T>::value_type) noexcept;
template<class T>
  bool atomic_compare_exchange_strong(atomic<T>*,
                                      typename atomic<T>::value_type*,
                                      typename atomic<T>::value_type) noexcept;
template<class T>
  bool atomic_compare_exchange_weak_explicit(volatile atomic<T>*,
                                             typename atomic<T>::value_type*,
                                             typename atomic<T>::value_type,
                                             memory_order, memory_order) noexcept;
template<class T>
  bool atomic_compare_exchange_weak_explicit(atomic<T>*,
                                             typename atomic<T>::value_type*,
                                             typename atomic<T>::value_type,
                                             memory_order, memory_order) noexcept;
template<class T>
  bool atomic_compare_exchange_strong_explicit(volatile atomic<T>*,
                                               typename atomic<T>::value_type*,
                                               typename atomic<T>::value_type,
                                               memory_order, memory_order) noexcept;
template<class T>
  bool atomic_compare_exchange_strong_explicit(atomic<T>*,
                                               typename atomic<T>::value_type*,
                                               typename atomic<T>::value_type,
                                               memory_order, memory_order) noexcept;
template<class T>
  T atomic_fetch_add(volatile atomic<T>*, typename atomic<T>::difference_type) noexcept;
template<class T>
  T atomic_fetch_add(atomic<T>*, typename atomic<T>::difference_type) noexcept;
template<class T>
  T atomic_fetch_add_explicit(volatile atomic<T>*, typename atomic<T>::difference_type,
                              memory_order) noexcept;
template<class T>
  T atomic_fetch_add_explicit(atomic<T>*, typename atomic<T>::difference_type,
                              memory_order) noexcept;
template<class T>
  T atomic_fetch_sub(volatile atomic<T>*, typename atomic<T>::difference_type) noexcept;
template<class T>
  T atomic_fetch_sub(atomic<T>*, typename atomic<T>::difference_type) noexcept;
template<class T>
  T atomic_fetch_sub_explicit(volatile atomic<T>*, typename atomic<T>::difference_type,
                              memory_order) noexcept;
template<class T>
  T atomic_fetch_sub_explicit(atomic<T>*, typename atomic<T>::difference_type,
                              memory_order) noexcept;
template<class T>
  T atomic_fetch_and(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;
template<class T>
  T atomic_fetch_and(atomic<T>*, typename atomic<T>::value_type) noexcept;
template<class T>
  T atomic_fetch_and_explicit(volatile atomic<T>*, typename atomic<T>::value_type,
                              memory_order) noexcept;
template<class T>
  T atomic_fetch_and_explicit(atomic<T>*, typename atomic<T>::value_type,
                              memory_order) noexcept;
template<class T>
  T atomic_fetch_or(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;
template<class T>
  T atomic_fetch_or(atomic<T>*, typename atomic<T>::value_type) noexcept;
template<class T>
  T atomic_fetch_or_explicit(volatile atomic<T>*, typename atomic<T>::value_type,
                             memory_order) noexcept;
template<class T>
  T atomic_fetch_or_explicit(atomic<T>*, typename atomic<T>::value_type,
                             memory_order) noexcept;
template<class T>
  T atomic_fetch_xor(volatile atomic<T>*, typename atomic<T>::value_type) noexcept;
template<class T>
  T atomic_fetch_xor(atomic<T>*, typename atomic<T>::value_type) noexcept;
template<class T>
  T atomic_fetch_xor_explicit(volatile atomic<T>*, typename atomic<T>::value_type,
                              memory_order) noexcept;
template<class T>
  T atomic_fetch_xor_explicit(atomic<T>*, typename atomic<T>::value_type,
                              memory_order) noexcept;
template<class T>
  void atomic_wait(const volatile atomic<T>*, typename atomic<T>::value_type);
template<class T>
  void atomic_wait(const atomic<T>*, typename atomic<T>::value_type);
template<class T>
  void atomic_wait_explicit(const volatile atomic<T>*, typename atomic<T>::value_type,
                            memory_order);
template<class T>
  void atomic_wait_explicit(const atomic<T>*, typename atomic<T>::value_type,
                            memory_order);
template<class T>
  void atomic_notify_one(volatile atomic<T>*);
template<class T>
  void atomic_notify_one(atomic<T>*);
template<class T>
  void atomic_notify_all(volatile atomic<T>*);
template<class T>
  void atomic_notify_all(atomic<T>*);
// 33.5.3, type aliases
using atomic_bool           = atomic<bool>;
using atomic_char           = atomic<char>;
using atomic_schar          = atomic<signed char>;
using atomic_uchar          = atomic<unsigned char>;
using atomic_short          = atomic<short>;
using atomic_ushort         = atomic<unsigned short>;
using atomic_int            = atomic<int>;
using atomic_uint           = atomic<unsigned int>;
using atomic_long           = atomic<long>;
using atomic_ulong          = atomic<unsigned long>;
using atomic_llong          = atomic<long long>;
using atomic_ullong         = atomic<unsigned long long>;
using atomic_char8_t        = atomic<char8_t>;
using atomic_char16_t       = atomic<char16_t>;
using atomic_char32_t       = atomic<char32_t>;
using atomic_wchar_t        = atomic<wchar_t>;
using atomic_int8_t         = atomic<int8_t>;
using atomic_uint8_t        = atomic<uint8_t>;
using atomic_int16_t        = atomic<int16_t>;
using atomic_uint16_t       = atomic<uint16_t>;
using atomic_int32_t        = atomic<int32_t>;
using atomic_uint32_t       = atomic<uint32_t>;
using atomic_int64_t        = atomic<int64_t>;
using atomic_uint64_t       = atomic<uint64_t>;
using atomic_int_least8_t   = atomic<int_least8_t>;
using atomic_uint_least8_t  = atomic<uint_least8_t>;
using atomic_int_least16_t  = atomic<int_least16_t>;
using atomic_uint_least16_t = atomic<uint_least16_t>;
using atomic_int_least32_t  = atomic<int_least32_t>;
using atomic_uint_least32_t = atomic<uint_least32_t>;
using atomic_int_least64_t  = atomic<int_least64_t>;
using atomic_uint_least64_t = atomic<uint_least64_t>;
using atomic_int_fast8_t    = atomic<int_fast8_t>;
using atomic_uint_fast8_t   = atomic<uint_fast8_t>;
using atomic_int_fast16_t   = atomic<int_fast16_t>;
using atomic_uint_fast16_t  = atomic<uint_fast16_t>;
using atomic_int_fast32_t   = atomic<int_fast32_t>;
using atomic_uint_fast32_t  = atomic<uint_fast32_t>;
using atomic_int_fast64_t   = atomic<int_fast64_t>;
using atomic_uint_fast64_t  = atomic<uint_fast64_t>;
using atomic_intptr_t       = atomic<intptr_t>;
using atomic_uintptr_t      = atomic<uintptr_t>;
using atomic_size_t         = atomic<size_t>;
using atomic_ptrdiff_t      = atomic<ptrdiff_t>;
using atomic_intmax_t       = atomic<intmax_t>;
using atomic_uintmax_t      = atomic<uintmax_t>;
using atomic_signed_lock_free   = see below;
using atomic_unsigned_lock_free = see below;
// 33.5.10, flag type and operations
struct atomic_flag;
  bool atomic_flag_test(const volatile atomic_flag*) noexcept;
  bool atomic_flag_test(const atomic_flag*) noexcept;
  bool atomic_flag_test_explicit(const volatile atomic_flag*, memory_order) noexcept;
  bool atomic_flag_test_explicit(const atomic_flag*, memory_order) noexcept;
  bool atomic_flag_test_and_set(volatile atomic_flag*) noexcept;
  bool atomic_flag_test_and_set(atomic_flag*) noexcept;
  bool atomic_flag_test_and_set_explicit(volatile atomic_flag*, memory_order) noexcept;
  bool atomic_flag_test_and_set_explicit(atomic_flag*, memory_order) noexcept;
  void atomic_flag_clear(volatile atomic_flag*) noexcept;
  void atomic_flag_clear(atomic_flag*) noexcept;
  void atomic_flag_clear_explicit(volatile atomic_flag*, memory_order) noexcept;
  void atomic_flag_clear_explicit(atomic_flag*, memory_order) noexcept;
  void atomic_flag_wait(const volatile atomic_flag*, bool) noexcept;
  void atomic_flag_wait(const atomic_flag*, bool) noexcept;
  void atomic_flag_wait_explicit(const volatile atomic_flag*,
                                 bool, memory_order) noexcept;
  void atomic_flag_wait_explicit(const atomic_flag*,
                                 bool, memory_order) noexcept;
  void atomic_flag_notify_one(volatile atomic_flag*) noexcept;
  void atomic_flag_notify_one(atomic_flag*) noexcept;
  void atomic_flag_notify_all(volatile atomic_flag*) noexcept;
  void atomic_flag_notify_all(atomic_flag*) noexcept;
// 33.5.11, fences
extern "C" void atomic_thread_fence(memory_order) noexcept;
extern "C" void atomic_signal_fence(memory_order) noexcept;
}
33.5.3 Type aliases [atomics.alias]
1 The type aliases atomic_intN_t, atomic_uintN_t, atomic_intptr_t, and atomic_uintptr_t are defined if and only if intN_t, uintN_t, intptr_t, and uintptr_t are defined, respectively.
2 The type aliases atomic_signed_lock_free and atomic_unsigned_lock_free name specializations of atomic whose template arguments are integral types, respectively signed and unsigned, and whose is_always_lock_free property is true.
[Note 1: These aliases are optional in freestanding implementations (16.4.2.4). end note]
Implementations should choose for these aliases the integral specializations of atomic for which the atomic waiting and notifying operations (33.5.6) are most efficient.
33.5.4 Order and consistency [atomics.order]
namespace std {
enum class memory_order : unspecified {
      relaxed, consume, acquire, release, acq_rel, seq_cst
    };
    inline constexpr memory_order memory_order_relaxed = memory_order::relaxed;
    inline constexpr memory_order memory_order_consume = memory_order::consume;
    inline constexpr memory_order memory_order_acquire = memory_order::acquire;
    inline constexpr memory_order memory_order_release = memory_order::release;
    inline constexpr memory_order memory_order_acq_rel = memory_order::acq_rel;
    inline constexpr memory_order memory_order_seq_cst = memory_order::seq_cst;
}
1 The enumeration memory_order specifies the detailed regular (non-atomic) memory synchronization order as defined in 6.9.2 and may provide for operation ordering. Its enumerated values and their meanings are as follows:
(1.1) memory_order::relaxed: no operation orders memory.
(1.2) memory_order::release, memory_order::acq_rel, and memory_order::seq_cst: a store operation performs a release operation on the affected memory location.
(1.3) memory_order::consume: a load operation performs a consume operation on the affected memory location.
[Note 1: Prefer memory_order::acquire, which provides stronger guarantees than memory_order::consume. Implementations have found it infeasible to provide performance better than that of memory_order::acquire. Specification revisions are under consideration. end note]
(1.4) memory_order::acquire, memory_order::acq_rel, and memory_order::seq_cst: a load operation performs an acquire operation on the affected memory location.
2 An atomic operation A that performs a release operation on an atomic object M synchronizes with an atomic operation B that performs an acquire operation on M and takes its value from any side effect in the release sequence headed by A.
3 An atomic operation A on some atomic object M is coherence-ordered before another atomic operation B on M if
(3.1) A is a modification, and B reads the value stored by A, or
(3.2) A precedes B in the modification order of M, or
(3.3) A and B are not the same atomic read-modify-write operation, and there exists an atomic modification X of M such that A reads the value stored by X and X precedes B in the modification order of M, or
(3.4) there exists an atomic modification X of M such that A is coherence-ordered before X and X is coherence-ordered before B.
[Note 2: Atomic operations specifying memory_order::relaxed are relaxed with respect to memory ordering. Implementations must still guarantee that any given atomic access to a particular atomic object be indivisible with respect to all other atomic accesses to that object. end note]
4 There is a single total order S on all memory_order::seq_cst operations, including fences, that satisfies the following constraints. First, if A and B are memory_order::seq_cst operations and A strongly happens before B, then A precedes B in S. Second, for every pair of atomic operations A and B on an object M, where A is coherence-ordered before B, the following four conditions are required to be satisfied by S:
(4.1) if A and B are both memory_order::seq_cst operations, then A precedes B in S; and
(4.2) if A is a memory_order::seq_cst operation and B happens before a memory_order::seq_cst fence Y, then A precedes Y in S; and
(4.3) if a memory_order::seq_cst fence X happens before A and B is a memory_order::seq_cst operation, then X precedes B in S; and
(4.4) if a memory_order::seq_cst fence X happens before A and B happens before a memory_order::seq_cst fence Y, then X precedes Y in S.
5 [Note 3: This definition ensures that S is consistent with the modification order of any atomic object M. It also ensures that a memory_order::seq_cst load A of M gets its value either from the last modification of M that precedes A in S or from some non-memory_order::seq_cst modification of M that does not happen before any modification of M that precedes A in S. end note]
6 [Note 4: We do not require that S be consistent with happens before (6.9.2.2). This allows more efficient implementation of memory_order::acquire and memory_order::release on some machine architectures. It can produce surprising results when these are mixed with memory_order::seq_cst accesses. end note]
7 [Note 5: memory_order::seq_cst ensures sequential consistency only for a program that is free of data races and uses exclusively memory_order::seq_cst atomic operations. Any use of weaker ordering will invalidate this guarantee unless extreme care is used. In many cases, memory_order::seq_cst atomic operations are reorderable with respect to other atomic operations performed by the same thread. end note]
8 Implementations should ensure that no out-of-thin-air values are computed that circularly depend on their own computation.
[Note 6: For example, with x and y initially zero,
// Thread 1:
     r1 = y.load(memory_order::relaxed);
     x.store(r1, memory_order::relaxed);
// Thread 2:
     r2 = x.load(memory_order::relaxed);
     y.store(r2, memory_order::relaxed);
this recommendation discourages producing r1 == r2 == 42, since the store of 42 to y is only possible if the store to x stores 42, which circularly depends on the store to y storing 42. Note that without this restriction, such an execution is possible. end note]
9 [Note 7: The recommendation similarly disallows r1 == r2 == 42 in the following example, with x and y again initially zero:
// Thread 1:
     r1 = x.load(memory_order::relaxed);
     if (r1 == 42) y.store(42, memory_order::relaxed);
// Thread 2:
     r2 = y.load(memory_order::relaxed);
     if (r2 == 42) x.store(42, memory_order::relaxed);
end note]
10 Atomic read-modify-write operations shall always read the last value (in the modification order) written before the write associated with the read-modify-write operation.
11 Implementations should make atomic stores visible to atomic loads within a reasonable amount of time.
   template<class T>
     T kill_dependency(T y) noexcept;
12 Effects: The argument does not carry a dependency to the return value (6.9.2).
13 Returns: y.
33.5.5 Lock-free property [atomics.lockfree]
#define ATOMIC_BOOL_LOCK_FREE     unspecified
#define ATOMIC_CHAR_LOCK_FREE     unspecified
#define ATOMIC_CHAR8_T_LOCK_FREE  unspecified
#define ATOMIC_CHAR16_T_LOCK_FREE unspecified
#define ATOMIC_CHAR32_T_LOCK_FREE unspecified
#define ATOMIC_WCHAR_T_LOCK_FREE  unspecified

#define ATOMIC_SHORT_LOCK_FREE    unspecified
#define ATOMIC_INT_LOCK_FREE      unspecified
#define ATOMIC_LONG_LOCK_FREE     unspecified
#define ATOMIC_LLONG_LOCK_FREE    unspecified
#define ATOMIC_POINTER_LOCK_FREE  unspecified
1 The ATOMIC_..._LOCK_FREE macros indicate the lock-free property of the corresponding atomic types, with the signed and unsigned variants grouped together. The properties also apply to the corresponding (partial) specializations of the atomic template. A value of 0 indicates that the types are never lock-free. A value of 1 indicates that the types are sometimes lock-free. A value of 2 indicates that the types are always lock-free.
2 At least one signed integral specialization of the atomic template, along with the specialization for the corresponding unsigned type (6.8.2), is always lock-free.
[Note 1: This requirement is optional in freestanding implementations (16.4.2.4). end note]
3 The functions atomic<T>::is_lock_free and atomic_is_lock_free (33.5.8.2) indicate whether the object is lock-free. In any given program execution, the result of the lock-free query is the same for all atomic objects of the same type.
4 Atomic operations that are not lock-free are considered to potentially block (6.9.2.3).
5 Recommended practice: Operations that are lock-free should also be address-free.313) The implementation of these operations should not depend on any per-process state.
[Note 2: This restriction enables communication by memory that is mapped into a process more than once and by memory that is shared between two processes. end note]
33.5.6 Waiting and notifying [atomics.wait]
1 Atomic waiting operations and atomic notifying operations provide a mechanism to wait for the value of an atomic object to change more efficiently than can be achieved with polling. An atomic waiting operation may block until it is unblocked by an atomic notifying operation, according to each function's effects.
[Note 1: Programs are not guaranteed to observe transient atomic values, an issue known as the A-B-A problem, resulting in continued blocking if a condition is only temporarily met. end note]
2 [Note 2: The following functions are atomic waiting operations:
(2.1) atomic<T>::wait,
(2.2) atomic_flag::wait,
(2.3) atomic_wait and atomic_wait_explicit,
(2.4) atomic_flag_wait and atomic_flag_wait_explicit, and
(2.5) atomic_ref<T>::wait.
end note]
3 [Note 3: The following functions are atomic notifying operations:
(3.1) atomic<T>::notify_one and atomic<T>::notify_all,
(3.2) atomic_flag::notify_one and atomic_flag::notify_all,
(3.3) atomic_notify_one and atomic_notify_all,
(3.4) atomic_flag_notify_one and atomic_flag_notify_all, and
(3.5) atomic_ref<T>::notify_one and atomic_ref<T>::notify_all.
end note]
4 A call to an atomic waiting operation on an atomic object M is eligible to be unblocked by a call to an atomic notifying operation on M if there exist side effects X and Y on M such that:
(4.1) the atomic waiting operation has blocked after observing the result of X,
(4.2) X precedes Y in the modification order of M, and
(4.3) Y happens before the call to the atomic notifying operation.
313) That is, atomic operations on the same memory location via two different addresses will communicate atomically.
33.5.7 Class template atomic_ref [atomics.ref.generic]
33.5.7.1 General [atomics.ref.generic.general]
  namespace std {
    template<class T> struct atomic_ref {
    private:
      T* ptr;        // exposition only
    public:
      using value_type = T;
      static constexpr size_t required_alignment = implementation-defined;
      static constexpr bool is_always_lock_free = implementation-defined;
      bool is_lock_free() const noexcept;
         explicit atomic_ref(T&);
         atomic_ref(const atomic_ref&) noexcept;
         atomic_ref& operator=(const atomic_ref&) = delete;
         void store(T, memory_order = memory_order::seq_cst) const noexcept;
         T operator=(T) const noexcept;
         T load(memory_order = memory_order::seq_cst) const noexcept;
         operator T() const noexcept;
         T exchange(T, memory_order = memory_order::seq_cst) const noexcept;
         bool compare_exchange_weak(T&, T,
                                    memory_order, memory_order) const noexcept;
         bool compare_exchange_strong(T&, T,
                                      memory_order, memory_order) const noexcept;
         bool compare_exchange_weak(T&, T,
                                    memory_order = memory_order::seq_cst) const noexcept;
         bool compare_exchange_strong(T&, T,
                                      memory_order = memory_order::seq_cst) const noexcept;
         void wait(T, memory_order = memory_order::seq_cst) const noexcept;
         void notify_one() const noexcept;
         void notify_all() const noexcept;
    };
  }
1 An atomic_ref object applies atomic operations (33.5.1) to the object referenced by *ptr such that, for the lifetime (6.7.3) of the atomic_ref object, the object referenced by *ptr is an atomic object (6.9.2.2).
2 The program is ill-formed if is_trivially_copyable_v<T> is false.
3 The lifetime (6.7.3) of an object referenced by *ptr shall exceed the lifetime of all atomic_refs that reference the object. While any atomic_ref instances exist that reference the *ptr object, all accesses to that object shall exclusively occur through those atomic_ref instances. No subobject of the object referenced by atomic_ref shall be concurrently referenced by any other atomic_ref object.
4 Atomic operations applied to an object through a referencing atomic_ref are atomic with respect to atomic operations applied through any other atomic_ref referencing the same object.
[Note 1: Atomic operations or the atomic_ref constructor can acquire a shared resource, such as a lock associated with the referenced object, to enable atomic operations to be applied to the referenced object. end note]
33.5.7.2 Operations [atomics.ref.ops]
   static constexpr size_t required_alignment;
1 The alignment required for an object to be referenced by an atomic reference, which is at least alignof(T).
2 [Note 1: Hardware could require an object referenced by an atomic_ref to have stricter alignment (6.7.6) than other objects of type T. Further, whether operations on an atomic_ref are lock-free could depend on the alignment of the referenced object. For example, lock-free operations on std::complex<double> could be supported only if aligned to 2*alignof(double). end note]

18
19
Preconditions: The failure argument is neither memory_order::release nor memory_order::acq_- rel.
Effects: Retrieves the value in expected. It then atomically compares the value representation of the value referenced by *ptr for equality with that previously retrieved from expected, and if true, replaces the value referenced by *ptr with that in desired. If and only if the comparison is true, memory is affected according to the value of success, and if the comparison is false, memory is affected
© ISO/IEC N4910
   static constexpr bool is_always_lock_free;
3
4
5 6 7
8
9
10
11
12 13 14
15
16
17
The static data member is_always_lock_free is true if the atomic_ref types operations are always lock-free, and false otherwise.
bool is_lock_free() const noexcept;
Returns: true if operations on all objects of the type atomic_ref<T> are lock-free, false otherwise.
atomic_ref(T& obj);
Preconditions: The referenced object is aligned to required_alignment. Postconditions: *this references obj.
Throws: Nothing.
atomic_ref(const atomic_ref& ref) noexcept;
Postconditions: *this references the object referenced by ref.
void store(T desired, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: The order argument is neither memory_order::consume, memory_order::acquire, nor memory_order::acq_rel.
Effects: Atomically replaces the value referenced by *ptr with the value of desired. Memory is affected according to the value of order.
T operator=(T desired) const noexcept;
Effects: Equivalent to: store(desired);
  return desired;
T load(memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: The order argument is neither memory_order::release nor memory_order::acq_rel. Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value referenced by *ptr.
operator T() const noexcept;
Effects: Equivalent to: return load();
T exchange(T desired, memory_order order = memory_order::seq_cst) const noexcept;
Effects: Atomically replaces the value referenced by *ptr with desired. Memory is affected according to the value of order. This operation is an atomic read-modify-write operation (6.9.2).
Returns: Atomically returns the value referenced by *ptr immediately before the effects.
bool compare_exchange_weak(T& expected, T desired,
                           memory_order success, memory_order failure) const noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order success, memory_order failure) const noexcept;
bool compare_exchange_weak(T& expected, T desired,
                           memory_order order = memory_order::seq_cst) const noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order order = memory_order::seq_cst) const noexcept;
When only one memory_order argument is supplied, the value of success is order, and the value of failure is order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed. If and only if the comparison is false then, after the atomic operation, the value in expected is replaced by the value read from the value referenced by *ptr during the atomic comparison. If the operation returns true, these operations are atomic read-modify-write operations (6.9.2.2) on the value referenced by *ptr. Otherwise, these operations are atomic load operations on that memory.
Returns: The result of the comparison.
Remarks: A weak compare-and-exchange operation may fail spuriously. That is, even when the contents of memory referred to by expected and ptr are equal, it may return false and store back to expected the same memory contents that were originally there.
[Note 2: This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines. A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop. When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms. When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable. end note]
1 There are specializations of the atomic_ref class template for the integral types char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, char8_t, char16_t, char32_t, wchar_t, and any other types needed by the typedefs in the header <cstdint> (17.4.2). For each such type integral, the specialization atomic_ref<integral> provides additional atomic operations appropriate to integral types.
[Note 1: The specialization atomic_ref<bool> uses the primary template (33.5.7). end note]
namespace std {
  template<> struct atomic_ref<integral> {
  private:
    integral* ptr;  // exposition only
  public:
    using value_type = integral;
    using difference_type = value_type;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;
void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is neither memory_order::release nor memory_order::acq_rel.
Effects: Repeatedly performs the following steps, in order:
- Evaluates load(order) and compares its value representation for equality against that of old.
- If they compare unequal, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
Remarks: This function is an atomic waiting operation (33.5.6) on atomic object *ptr.
void notify_one() const noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation on *ptr that is eligible to be unblocked (33.5.6) by this call, if any such atomic waiting operations exist.
Remarks: This function is an atomic notifying operation (33.5.6) on atomic object *ptr.
void notify_all() const noexcept;
Effects: Unblocks the execution of all atomic waiting operations on *ptr that are eligible to be unblocked (33.5.6) by this call.
Remarks: This function is an atomic notifying operation (33.5.6) on atomic object *ptr.
33.5.7.3 Specializations for integral types [atomics.ref.int]
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations (6.9.2.2).
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: For signed integer types, the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.
[Note 2: There are no undefined results arising from the computation. end note]
    explicit atomic_ref(integral&);
    atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    void store(integral, memory_order = memory_order::seq_cst) const noexcept;
    integral operator=(integral) const noexcept;
    integral load(memory_order = memory_order::seq_cst) const noexcept;
    operator integral() const noexcept;
    integral exchange(integral, memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_weak(integral&, integral,
                               memory_order, memory_order) const noexcept;
    bool compare_exchange_strong(integral&, integral,
                                 memory_order, memory_order) const noexcept;
    bool compare_exchange_weak(integral&, integral,
                               memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_strong(integral&, integral,
                                 memory_order = memory_order::seq_cst) const noexcept;

    integral fetch_add(integral, memory_order = memory_order::seq_cst) const noexcept;
    integral fetch_sub(integral, memory_order = memory_order::seq_cst) const noexcept;
    integral fetch_and(integral, memory_order = memory_order::seq_cst) const noexcept;
    integral fetch_or(integral, memory_order = memory_order::seq_cst) const noexcept;
    integral fetch_xor(integral, memory_order = memory_order::seq_cst) const noexcept;

    integral operator++(int) const noexcept;
    integral operator--(int) const noexcept;
    integral operator++() const noexcept;
    integral operator--() const noexcept;
    integral operator+=(integral) const noexcept;
    integral operator-=(integral) const noexcept;
    integral operator&=(integral) const noexcept;
    integral operator|=(integral) const noexcept;
    integral operator^=(integral) const noexcept;

    void wait(integral, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() const noexcept;
    void notify_all() const noexcept;
  };
}
2 Descriptions are provided below only for members that differ from the primary template.
3 The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 141.
integral fetch_key(integral operand, memory_order order = memory_order::seq_cst) const noexcept;
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations (6.9.2.2).
integral operator op=(integral operand) const noexcept;
Effects: Equivalent to: return fetch_key(operand) op operand;
33.5.7.4 Specializations for floating-point types [atomics.ref.float]
1 There are specializations of the atomic_ref class template for the floating-point types float, double, and long double. For each such type floating-point, the specialization atomic_ref<floating-point> provides additional atomic operations appropriate to floating-point types.
namespace std {
  template<> struct atomic_ref<floating-point> {
  private:
    floating-point* ptr;  // exposition only
  public:
    using value_type = floating-point;
    using difference_type = value_type;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;

    explicit atomic_ref(floating-point&);
    atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    void store(floating-point, memory_order = memory_order::seq_cst) const noexcept;
    floating-point operator=(floating-point) const noexcept;
    floating-point load(memory_order = memory_order::seq_cst) const noexcept;
    operator floating-point() const noexcept;
    floating-point exchange(floating-point,
                            memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order, memory_order) const noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order, memory_order) const noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order = memory_order::seq_cst) const noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order = memory_order::seq_cst) const noexcept;

    floating-point fetch_add(floating-point,
                             memory_order = memory_order::seq_cst) const noexcept;
    floating-point fetch_sub(floating-point,
                             memory_order = memory_order::seq_cst) const noexcept;

    floating-point operator+=(floating-point) const noexcept;
    floating-point operator-=(floating-point) const noexcept;

    void wait(floating-point, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() const noexcept;
    void notify_all() const noexcept;
  };
}
2 Descriptions are provided below only for members that differ from the primary template.
3 The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 141.
floating-point fetch_key(floating-point operand,
                         memory_order order = memory_order::seq_cst) const noexcept;
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: If the result is not a representable value for its type (7.1), the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on floating-point should conform to the std::numeric_limits<floating-point> traits associated with the floating-point type (17.3.3). The floating-point environment (28.3) for atomic arithmetic operations on floating-point may be different from the calling thread's floating-point environment.
floating-point operator op=(floating-point operand) const noexcept;
Effects: Equivalent to: return fetch_key(operand) op operand;
33.5.7.5 Partial specialization for pointers [atomics.ref.pointer]
namespace std {
  template<class T> struct atomic_ref<T*> {
  private:
    T** ptr;  // exposition only
  public:
    using value_type = T*;
    using difference_type = ptrdiff_t;
    static constexpr size_t required_alignment = implementation-defined;
    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const noexcept;
      explicit atomic_ref(T*&);
      atomic_ref(const atomic_ref&) noexcept;
      atomic_ref& operator=(const atomic_ref&) = delete;
      void store(T*, memory_order = memory_order::seq_cst) const noexcept;
      T* operator=(T*) const noexcept;
      T* load(memory_order = memory_order::seq_cst) const noexcept;
      operator T*() const noexcept;
      T* exchange(T*, memory_order = memory_order::seq_cst) const noexcept;
      bool compare_exchange_weak(T*&, T*,
                                 memory_order, memory_order) const noexcept;
      bool compare_exchange_strong(T*&, T*,
                                   memory_order, memory_order) const noexcept;
      bool compare_exchange_weak(T*&, T*,
                                 memory_order = memory_order::seq_cst) const noexcept;
      bool compare_exchange_strong(T*&, T*,
                                   memory_order = memory_order::seq_cst) const noexcept;
      T* fetch_add(difference_type, memory_order = memory_order::seq_cst) const noexcept;
      T* fetch_sub(difference_type, memory_order = memory_order::seq_cst) const noexcept;
      T* operator++(int) const noexcept;
      T* operator--(int) const noexcept;
      T* operator++() const noexcept;
      T* operator--() const noexcept;
      T* operator+=(difference_type) const noexcept;
      T* operator-=(difference_type) const noexcept;
      void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
      void notify_one() const noexcept;
      void notify_all() const noexcept;
}; }
1 Descriptions are provided below only for members that differ from the primary template.
2 The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 142.
T* fetch_key(difference_type operand, memory_order order = memory_order::seq_cst) const noexcept;
Mandates: T is a complete object type.
Effects: Atomically replaces the value referenced by *ptr with the result of the computation applied to the value referenced by *ptr and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations (6.9.2.2).
Returns: Atomically, the value referenced by *ptr immediately before the effects.
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior.
T* operator op=(difference_type operand) const noexcept;
Effects: Equivalent to: return fetch_key(operand) op operand;
33.5.7.6 Member operators common to integers and pointers to objects [atomics.ref.memop]
value_type operator++(int) const noexcept;
Effects: Equivalent to: return fetch_add(1);
value_type operator--(int) const noexcept;
Effects: Equivalent to: return fetch_sub(1);
value_type operator++() const noexcept;
Effects: Equivalent to: return fetch_add(1) + 1;
value_type operator--() const noexcept;
Effects: Equivalent to: return fetch_sub(1) - 1;
33.5.8 Class template atomic [atomics.types.generic]
33.5.8.1 General [atomics.types.generic.general]
namespace std {
  template<class T> struct atomic {
    using value_type = T;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;
// 33.5.8.2, operations on atomic types
constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
constexpr atomic(T) noexcept;
atomic(const atomic&) = delete;
atomic& operator=(const atomic&) = delete;
atomic& operator=(const atomic&) volatile = delete;
T load(memory_order = memory_order::seq_cst) const volatile noexcept;
T load(memory_order = memory_order::seq_cst) const noexcept;
operator T() const volatile noexcept;
operator T() const noexcept;
void store(T, memory_order = memory_order::seq_cst) volatile noexcept;
void store(T, memory_order = memory_order::seq_cst) noexcept;
T operator=(T) volatile noexcept;
T operator=(T) noexcept;
T exchange(T, memory_order = memory_order::seq_cst) volatile noexcept;
T exchange(T, memory_order = memory_order::seq_cst) noexcept;
bool compare_exchange_weak(T&, T, memory_order, memory_order) volatile noexcept;
bool compare_exchange_weak(T&, T, memory_order, memory_order) noexcept;
bool compare_exchange_strong(T&, T, memory_order, memory_order) volatile noexcept;
bool compare_exchange_strong(T&, T, memory_order, memory_order) noexcept;
bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) noexcept;
- is_trivially_copyable_v<T>,
- is_copy_constructible_v<T>,
- is_move_constructible_v<T>,
- is_copy_assignable_v<T>, or
- is_move_assignable_v<T>
  bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) volatile noexcept;
  bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) noexcept;
  void wait(T, memory_order = memory_order::seq_cst) const volatile noexcept;
  void wait(T, memory_order = memory_order::seq_cst) const noexcept;
  void notify_one() volatile noexcept;
  void notify_one() noexcept;
  void notify_all() volatile noexcept;
  void notify_all() noexcept;
};
}
1 The template argument for T shall meet the Cpp17CopyConstructible and Cpp17CopyAssignable requirements. The program is ill-formed if any of the type traits listed above is false.
[Note 1: Type arguments that are not also statically initializable can be difficult to use. end note]
2 The specialization atomic<bool> is a standard-layout struct.
3 [Note 2: The representation of an atomic specialization need not have the same size and alignment requirement as its corresponding argument type. end note]
33.5.8.2 Operations on atomic types [atomics.types.operations]
   constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
Mandates: is_default_constructible_v<T> is true.
Effects: Initializes the atomic object with the value of T(). Initialization is not an atomic operation (6.9.2).
constexpr atomic(T desired) noexcept;
Effects: Initializes the object with the value desired. Initialization is not an atomic operation (6.9.2).
[Note 1: It is possible to have an access to an atomic object A race with its construction, for example by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior. end note]
static constexpr bool is_always_lock_free = implementation-defined;
The static data member is_always_lock_free is true if the atomic type's operations are always lock-free, and false otherwise.
[Note 2: The value of is_always_lock_free is consistent with the value of the corresponding ATOMIC_..._LOCK_FREE macro, if defined. end note]
bool is_lock_free() const volatile noexcept;
bool is_lock_free() const noexcept;
Returns: true if the object's operations are lock-free, false otherwise.
[Note 3: The return value of the is_lock_free member function is consistent with the value of is_always_lock_free for the same type. end note]
void store(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
void store(T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: The order argument is neither memory_order::consume, memory_order::acquire, nor memory_order::acq_rel.
Effects: Atomically replaces the value pointed to by this with the value of desired. Memory is affected according to the value of order.
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: The failure argument is neither memory_order::release nor memory_order::acq_rel.
Effects: Retrieves the value in expected. It then atomically compares the value representation of the value pointed to by this for equality with that previously retrieved from expected, and if true, replaces the value pointed to by this with that in desired. If and only if the comparison is true, memory is affected according to the value of success, and if the comparison is false, memory is affected according to the value of failure. When only one memory_order argument is supplied, the value of success is order, and the value of failure is order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed. If and only if the comparison is false then, after the atomic operation, the value in expected is replaced by the value pointed to by this during the atomic comparison. If the operation returns true, these operations are atomic read-modify-write
T operator=(T desired) volatile noexcept;
T operator=(T desired) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to store(desired).
Returns: desired.
T load(memory_order order = memory_order::seq_cst) const volatile noexcept;
T load(memory_order order = memory_order::seq_cst) const noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Preconditions: The order argument is neither memory_order::release nor memory_order::acq_rel.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value pointed to by this.
operator T() const volatile noexcept;
operator T() const noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Equivalent to: return load();
T exchange(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
T exchange(T desired, memory_order order = memory_order::seq_cst) noexcept;
Constraints: For the volatile overload of this function, is_always_lock_free is true.
Effects: Atomically replaces the value pointed to by this with desired. Memory is affected according to the value of order. These operations are atomic read-modify-write operations (6.9.2).
Returns: Atomically returns the value pointed to by this immediately before the effects.
bool compare_exchange_weak(T& expected, T desired,
                           memory_order success, memory_order failure) volatile noexcept;
bool compare_exchange_weak(T& expected, T desired,
                           memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order success, memory_order failure) volatile noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order success, memory_order failure) noexcept;
bool compare_exchange_weak(T& expected, T desired,
                           memory_order order = memory_order::seq_cst) volatile noexcept;
bool compare_exchange_weak(T& expected, T desired,
                           memory_order order = memory_order::seq_cst) noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order order = memory_order::seq_cst) volatile noexcept;
bool compare_exchange_strong(T& expected, T desired,
                             memory_order order = memory_order::seq_cst) noexcept;
operations (6.9.2) on the memory pointed to by this. Otherwise, these operations are atomic load operations on that memory.
Returns: The result of the comparison.
[Note 4: For example, the effect of compare_exchange_strong on objects without padding bits (6.8.1) is
  if (memcmp(this, &expected, sizeof(*this)) == 0)
    memcpy(this, &desired, sizeof(*this));
  else
    memcpy(&expected, this, sizeof(*this));
end note]
[Example 1: The expected use of the compare-and-exchange operations is as follows. The compare-and-exchange
operations will update expected when another iteration of the loop is needed.
  expected = current.load();
  do {
    desired = function(expected);
  } while (!current.compare_exchange_weak(expected, desired));
end example]
[Example 2: Because the expected value is updated only on failure, code releasing the memory containing the expected value on success will work. For example, list head insertion will act atomically and would not introduce a data race in the following code:
Implementations should ensure that weak compare-and-exchange operations do not consistently return false unless either the atomic object has value different from expected or there are concurrent modifications to the atomic object.
Remarks: A weak compare-and-exchange operation may fail spuriously. That is, even when the contents of memory referred to by expected and this are equal, it may return false and store back to expected the same memory contents that were originally there.
[Note 5: This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines. A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop. When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms. When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable. end note]
[Note 6: Under cases where the memcpy and memcmp semantics of the compare-and-exchange operations apply, the comparisons can fail for values that compare equal with operator== if the value representation has trap bits or alternate representations of the same value. Notably, on implementations conforming to ISO/IEC/IEEE 60559, floating-point -0.0 and +0.0 will not compare equal with memcmp but will compare equal with operator==, and NaNs with the same payload will compare equal with memcmp but will not compare equal with operator==. end note]
[Note 7: Because compare-and-exchange acts on an object's value representation, padding bits that never participate in the object's value representation are ignored. As a consequence, the following code is guaranteed to avoid spurious failure:
struct padded {
  char clank = 0x42;
  // Padding here.
  unsigned biff = 0xC0DEFEFE;
};
atomic<padded> pad = {};

bool zap() {
  padded expected, desired{0, 0};
  return pad.compare_exchange_strong(expected, desired);
}
end note]
  do {
    p->next = head;                                    // make new list node point to the current head
  } while (!head.compare_exchange_weak(p->next, p));   // try to insert
end example]
[Note 8: For a union with bits that participate in the value representation of some members but not others, compare-and-exchange might always fail. This is because such padding bits have an indeterminate value when they do not participate in the value representation of the active member. As a consequence, the following code is not guaranteed to ever succeed:
union pony {
  double celestia = 0.;
  short luna;  // padded
};
atomic<pony> princesses = {};

bool party(pony desired) {
  pony expected;
  return princesses.compare_exchange_strong(expected, desired);
}
end note]
void wait(T old, memory_order order = memory_order::seq_cst) const volatile noexcept;
void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
Preconditions: order is neither memory_order::release nor memory_order::acq_rel.
Effects: Repeatedly performs the following steps, in order:
- Evaluates load(order) and compares its value representation for equality against that of old.
- If they compare unequal, returns.
- Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
1 There are specializations of the atomic class template for the integral types char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, char8_t, char16_t, char32_t, wchar_t, and any other types needed by the typedefs in the header
<cstdint> (17.4.2). For each such type integral, the specialization atomic<integral> provides additional atomic operations appropriate to integral types.
[Note 1: The specialization atomic<bool> uses the primary template (33.5.8). end note]
namespace std {
  template<> struct atomic<integral> {
    using value_type = integral;
    using difference_type = value_type;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(integral) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
Remarks: This function is an atomic waiting operation (33.5.6).
void notify_one() volatile noexcept;
void notify_one() noexcept;
Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked (33.5.6) by this call, if any such atomic waiting operations exist.
Remarks: This function is an atomic notifying operation (33.5.6).
void notify_all() volatile noexcept;
void notify_all() noexcept;
Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked (33.5.6) by this call.
Remarks: This function is an atomic notifying operation (33.5.6).
33.5.8.3 Specializations for integers [atomics.types.int]
    atomic& operator=(const atomic&) volatile = delete;

    void store(integral, memory_order = memory_order::seq_cst) volatile noexcept;
    void store(integral, memory_order = memory_order::seq_cst) noexcept;
    integral operator=(integral) volatile noexcept;
    integral operator=(integral) noexcept;
    integral load(memory_order = memory_order::seq_cst) const volatile noexcept;
    integral load(memory_order = memory_order::seq_cst) const noexcept;
    operator integral() const volatile noexcept;
    operator integral() const noexcept;
    integral exchange(integral, memory_order = memory_order::seq_cst) volatile noexcept;
    integral exchange(integral, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(integral&, integral,
                               memory_order, memory_order) volatile noexcept;
    bool compare_exchange_weak(integral&, integral,
                               memory_order, memory_order) noexcept;
    bool compare_exchange_strong(integral&, integral,
                                 memory_order, memory_order) volatile noexcept;
    bool compare_exchange_strong(integral&, integral,
                                 memory_order, memory_order) noexcept;
    bool compare_exchange_weak(integral&, integral,
                               memory_order = memory_order::seq_cst) volatile noexcept;
    bool compare_exchange_weak(integral&, integral,
                               memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(integral&, integral,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    bool compare_exchange_strong(integral&, integral,
                                 memory_order = memory_order::seq_cst) noexcept;

    integral fetch_add(integral, memory_order = memory_order::seq_cst) volatile noexcept;
    integral fetch_add(integral, memory_order = memory_order::seq_cst) noexcept;
    integral fetch_sub(integral, memory_order = memory_order::seq_cst) volatile noexcept;
    integral fetch_sub(integral, memory_order = memory_order::seq_cst) noexcept;
    integral fetch_and(integral, memory_order = memory_order::seq_cst) volatile noexcept;
    integral fetch_and(integral, memory_order = memory_order::seq_cst) noexcept;
    integral fetch_or(integral, memory_order = memory_order::seq_cst) volatile noexcept;
    integral fetch_or(integral, memory_order = memory_order::seq_cst) noexcept;
    integral fetch_xor(integral, memory_order = memory_order::seq_cst) volatile noexcept;
    integral fetch_xor(integral, memory_order = memory_order::seq_cst) noexcept;

    integral operator++(int) volatile noexcept;
    integral operator++(int) noexcept;
    integral operator--(int) volatile noexcept;
    integral operator--(int) noexcept;
    integral operator++() volatile noexcept;
    integral operator++() noexcept;
    integral operator--() volatile noexcept;
    integral operator--() noexcept;
    integral operator+=(integral) volatile noexcept;
    integral operator+=(integral) noexcept;
    integral operator-=(integral) volatile noexcept;
    integral operator-=(integral) noexcept;
    integral operator&=(integral) volatile noexcept;
    integral operator&=(integral) noexcept;
    integral operator|=(integral) volatile noexcept;
    integral operator|=(integral) noexcept;
    integral operator^=(integral) volatile noexcept;
    integral operator^=(integral) noexcept;

    void wait(integral, memory_order = memory_order::seq_cst) const volatile noexcept;
    void wait(integral, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    void notify_one() noexcept;
    void notify_all() volatile noexcept;
    void notify_all() noexcept;
  };
}
2 The atomic integral specializations are standard-layout structs. They each have a trivial destructor.
3 Descriptions are provided below only for members that differ from the primary template.
4 The following operations perform arithmetic computations. The correspondence among key, operator, and computation is specified in Table 141.

Table 141: Atomic arithmetic computations [tab:atomic.types.int.comp]

| key | Op | Computation |
|-----|----|-------------|
| add | + | addition |
| sub | - | subtraction |
| and | & | bitwise and |
| or | \| | bitwise inclusive or |
| xor | ^ | bitwise exclusive or |

T fetch_key(T operand, memory_order order = memory_order::seq_cst) volatile noexcept;
T fetch_key(T operand, memory_order order = memory_order::seq_cst) noexcept;

5 Constraints: For the volatile overload of this function, is_always_lock_free is true.
6 Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations (6.9.2).
7 Returns: Atomically, the value pointed to by this immediately before the effects.
8 Remarks: For signed integer types, the result is as if the object value and parameters were converted to their corresponding unsigned types, the computation performed on those types, and the result converted back to the signed type.
T operator op=(T operand) volatile noexcept;
T operator op=(T operand) noexcept;

9 Constraints: For the volatile overload of this function, is_always_lock_free is true.
10 Effects: Equivalent to: return fetch_key(operand) op operand;
[Note 2: There are no undefined results arising from the computation. — end note]

33.5.8.4 Specializations for floating-point types [atomics.types.float]

1 There are specializations of the atomic class template for the floating-point types float, double, and long double. For each such type floating-point, the specialization atomic<floating-point> provides additional atomic operations appropriate to floating-point types.

namespace std {
  template<> struct atomic<floating-point> {
    using value_type = floating-point;
    using difference_type = value_type;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;

    constexpr atomic() noexcept;
    constexpr atomic(floating-point) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;

    void store(floating-point, memory_order = memory_order::seq_cst) volatile noexcept;
    void store(floating-point, memory_order = memory_order::seq_cst) noexcept;
    floating-point operator=(floating-point) volatile noexcept;
    floating-point operator=(floating-point) noexcept;
    floating-point load(memory_order = memory_order::seq_cst) volatile noexcept;
    floating-point load(memory_order = memory_order::seq_cst) noexcept;
    operator floating-point() volatile noexcept;
    operator floating-point() noexcept;

    floating-point exchange(floating-point,
                            memory_order = memory_order::seq_cst) volatile noexcept;
    floating-point exchange(floating-point,
                            memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order, memory_order) volatile noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order, memory_order) noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order, memory_order) volatile noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order, memory_order) noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order = memory_order::seq_cst) volatile noexcept;
    bool compare_exchange_weak(floating-point&, floating-point,
                               memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
    bool compare_exchange_strong(floating-point&, floating-point,
                                 memory_order = memory_order::seq_cst) noexcept;

    floating-point fetch_add(floating-point,
                             memory_order = memory_order::seq_cst) volatile noexcept;
    floating-point fetch_add(floating-point,
                             memory_order = memory_order::seq_cst) noexcept;
    floating-point fetch_sub(floating-point,
                             memory_order = memory_order::seq_cst) volatile noexcept;
    floating-point fetch_sub(floating-point,
                             memory_order = memory_order::seq_cst) noexcept;

    floating-point operator+=(floating-point) volatile noexcept;
    floating-point operator+=(floating-point) noexcept;
    floating-point operator-=(floating-point) volatile noexcept;
    floating-point operator-=(floating-point) noexcept;

    void wait(floating-point, memory_order = memory_order::seq_cst) const volatile noexcept;
    void wait(floating-point, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    void notify_one() noexcept;
    void notify_all() volatile noexcept;
    void notify_all() noexcept;
  };
}

2 The atomic floating-point specializations are standard-layout structs. They each have a trivial destructor.
3 Descriptions are provided below only for members that differ from the primary template.
4 The following operations perform arithmetic addition and subtraction computations. The correspondence among key, operator, and computation is specified in Table 141.

T fetch_key(T operand, memory_order order = memory_order::seq_cst) volatile noexcept;
T fetch_key(T operand, memory_order order = memory_order::seq_cst) noexcept;

5 Constraints: For the volatile overload of this function, is_always_lock_free is true.
6 Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations (6.9.2).
7 Returns: Atomically, the value pointed to by this immediately before the effects.
8 Remarks: If the result is not a representable value for its type (7.1), the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on floating-point should conform to the std::numeric_limits<floating-point> traits associated with the floating-point type (17.3.3). The floating-point environment (28.3) for atomic arithmetic operations on floating-point may be different than the calling thread's floating-point environment.

T operator op=(T operand) volatile noexcept;
T operator op=(T operand) noexcept;

9 Constraints: For the volatile overload of this function, is_always_lock_free is true.
10 Effects: Equivalent to: return fetch_key(operand) op operand;
11 Remarks: If the result is not a representable value for its type (7.1), the result is unspecified, but the operations otherwise have no undefined behavior. Atomic arithmetic operations on floating-point should conform to the std::numeric_limits<floating-point> traits associated with the floating-point type (17.3.3). The floating-point environment (28.3) for atomic arithmetic operations on floating-point may be different than the calling thread's floating-point environment.
33.5.8.5 Partial specialization for pointers [atomics.types.pointer]
  namespace std {
    template<class T> struct atomic<T*> {
      using value_type = T*;
      using difference_type = ptrdiff_t;
      static constexpr bool is_always_lock_free = implementation-defined;
      bool is_lock_free() const volatile noexcept;
      bool is_lock_free() const noexcept;
      constexpr atomic() noexcept;
      constexpr atomic(T*) noexcept;
      atomic(const atomic&) = delete;
      atomic& operator=(const atomic&) = delete;
      atomic& operator=(const atomic&) volatile = delete;
      void store(T*, memory_order = memory_order::seq_cst) volatile noexcept;
      void store(T*, memory_order = memory_order::seq_cst) noexcept;
      T* operator=(T*) volatile noexcept;
      T* operator=(T*) noexcept;
      T* load(memory_order = memory_order::seq_cst) const volatile noexcept;
      T* load(memory_order = memory_order::seq_cst) const noexcept;
      operator T*() const volatile noexcept;
      operator T*() const noexcept;
      T* exchange(T*, memory_order = memory_order::seq_cst) volatile noexcept;
      T* exchange(T*, memory_order = memory_order::seq_cst) noexcept;
      bool compare_exchange_weak(T*&, T*, memory_order, memory_order) volatile noexcept;
      bool compare_exchange_weak(T*&, T*, memory_order, memory_order) noexcept;
      bool compare_exchange_strong(T*&, T*, memory_order, memory_order) volatile noexcept;
      bool compare_exchange_strong(T*&, T*, memory_order, memory_order) noexcept;
      bool compare_exchange_weak(T*&, T*,
                                 memory_order = memory_order::seq_cst) volatile noexcept;
      bool compare_exchange_weak(T*&, T*,
                                 memory_order = memory_order::seq_cst) noexcept;
      bool compare_exchange_strong(T*&, T*,
                                   memory_order = memory_order::seq_cst) volatile noexcept;
      bool compare_exchange_strong(T*&, T*,
                                   memory_order = memory_order::seq_cst) noexcept;
      T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
      T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
      T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
      T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
         T* operator++(int) volatile noexcept;
         T* operator++(int) noexcept;
         T* operator--(int) volatile noexcept;
         T* operator--(int) noexcept;
         T* operator++() volatile noexcept;
         T* operator++() noexcept;
         T* operator--() volatile noexcept;
         T* operator--() noexcept;
         T* operator+=(ptrdiff_t) volatile noexcept;
         T* operator+=(ptrdiff_t) noexcept;
         T* operator-=(ptrdiff_t) volatile noexcept;
         T* operator-=(ptrdiff_t) noexcept;
         void wait(T*, memory_order = memory_order::seq_cst) const volatile noexcept;
         void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
         void notify_one() volatile noexcept;
         void notify_one() noexcept;
         void notify_all() volatile noexcept;
         void notify_all() noexcept;
       };
}
1 There is a partial specialization of the atomic class template for pointers. Specializations of this partial specialization are standard-layout structs. They each have a trivial destructor.
2 Descriptions are provided below only for members that differ from the primary template.
3 The following operations perform pointer arithmetic. The correspondence among key, operator, and computation is specified in Table 142.

Table 142: Atomic pointer computations [tab:atomic.types.pointer.comp]

| key | Op | Computation |
|-----|----|-------------|
| add | + | addition |
| sub | - | subtraction |

T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) volatile noexcept;
T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) noexcept;

4 Constraints: For the volatile overload of this function, is_always_lock_free is true.
5 Mandates: T is a complete object type.
[Note 1: Pointer arithmetic on void* or function pointers is ill-formed. — end note]
6 Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations (6.9.2).
7 Returns: Atomically, the value pointed to by this immediately before the effects.
8 Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior.

T* operator op=(ptrdiff_t operand) volatile noexcept;
T* operator op=(ptrdiff_t operand) noexcept;

9 Constraints: For the volatile overload of this function, is_always_lock_free is true.
10 Effects: Equivalent to: return fetch_key(operand) op operand;
// 33.5.8.6 Member operators common to integers and pointers to objects [atomics.types.memop]
value_type operator++(int) volatile noexcept;
value_type operator++(int) noexcept;
// Constraints: For the volatile overload of this function, is_always_lock_free is true.
// Effects: Equivalent to: return fetch_add(1);
value_type operator--(int) volatile noexcept;
value_type operator--(int) noexcept;
// Constraints: For the volatile overload of this function, is_always_lock_free is true.
// Effects: Equivalent to: return fetch_sub(1);
value_type operator++() volatile noexcept;
value_type operator++() noexcept;
// Constraints: For the volatile overload of this function, is_always_lock_free is true.
// Effects: Equivalent to: return fetch_add(1) + 1;
value_type operator--() volatile noexcept;
value_type operator--() noexcept;
// Constraints: For the volatile overload of this function, is_always_lock_free is true.
// Effects: Equivalent to: return fetch_sub(1) - 1;
// 33.5.8.7 Partial specializations for smart pointers [util.smartptr.atomic]
// 33.5.8.7.1 General [util.smartptr.atomic.general]
// The library provides partial specializations of the atomic template for shared-ownership smart pointers (20.3.2).
// [Note 1: The partial specializations are declared in header <memory> (20.2.2). — end note]
// The behavior of all operations is as specified in 33.5.8, unless specified otherwise. The template parameter T of these partial specializations may be an incomplete type. All changes to an atomic smart pointer in 33.5.8.7, and all associated use_count increments, are guaranteed to be performed atomically. Associated use_count decrements are sequenced after the atomic operation, but are not required to be part of it. Any associated deletion and deallocation are sequenced after the atomic update step and are not part of the atomic operation.
// [Note 2: If the atomic operation uses locks, locks acquired by the implementation will be held when any use_count adjustments are performed, and will not be held when any destruction or deallocation resulting from this is performed. — end note]
// [Example 1:
template<typename T> class atomic_list {
  struct node {
    T t;
    shared_ptr<node> next;
  };
  atomic<shared_ptr<node>> head;
public:
  auto find(T t) const {
    auto p = head.load();
    while (p && p->t != t)
      p = p->next;
    return shared_ptr<node>(move(p));
  }
  void push_front(T t) {
    auto p = make_shared<node>();
    p->t = t;
    p->next = head;
    while (!head.compare_exchange_weak(p->next, p)) {}
  }
};
// — end example]
// 33.5.8.7.2 Partial specialization for shared_ptr [util.smartptr.atomic.shared]
  namespace std {
    template<class T> struct atomic<shared_ptr<T>> {
      using value_type = shared_ptr<T>;
      static constexpr bool is_always_lock_free = implementation-defined;
      bool is_lock_free() const noexcept;
      constexpr atomic() noexcept;
      constexpr atomic(nullptr_t) noexcept : atomic() { }
      atomic(shared_ptr<T> desired) noexcept;
      atomic(const atomic&) = delete;
      void operator=(const atomic&) = delete;
      shared_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
      operator shared_ptr<T>() const noexcept;
      void store(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
      void operator=(shared_ptr<T> desired) noexcept;
      shared_ptr<T> exchange(shared_ptr<T> desired,
                             memory_order order = memory_order::seq_cst) noexcept;
      bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                                 memory_order success, memory_order failure) noexcept;
      bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
                                   memory_order success, memory_order failure) noexcept;
      bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                                 memory_order order = memory_order::seq_cst) noexcept;
      bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
                                   memory_order order = memory_order::seq_cst) noexcept;
      void wait(shared_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
      void notify_one() noexcept;
      void notify_all() noexcept;
    private:
      shared_ptr<T> p;  // exposition only
    };
}
constexpr atomic() noexcept;
// Effects: Initializes p{}.
atomic(shared_ptr<T> desired) noexcept;
// Effects: Initializes the object with the value desired. Initialization is not an atomic operation (6.9.2).
// [Note 1: It is possible to have an access to an atomic object A race with its construction, for example, by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior. — end note]
void store(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
// Preconditions: order is neither memory_order::consume, memory_order::acquire, nor memory_order::acq_rel.
// Effects: Atomically replaces the value pointed to by this with the value of desired as if by p.swap(desired). Memory is affected according to the value of order.
void operator=(shared_ptr<T> desired) noexcept;
// Effects: Equivalent to store(desired).
shared_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
// Preconditions: order is neither memory_order::release nor memory_order::acq_rel.
// Effects: Memory is affected according to the value of order.
// Returns: Atomically returns p.
operator shared_ptr<T>() const noexcept;
// Effects: Equivalent to: return load();
shared_ptr<T> exchange(shared_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
// Effects: Atomically replaces p with desired as if by p.swap(desired). Memory is affected according to the value of order. This is an atomic read-modify-write operation (6.9.2.2).
// Returns: Atomically returns the value of p immediately before the effects.
bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                           memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
                             memory_order success, memory_order failure) noexcept;
// Preconditions: failure is neither memory_order::release nor memory_order::acq_rel.
// Effects: If p is equivalent to expected, assigns desired to p and has synchronization semantics corresponding to the value of success, otherwise assigns p to expected and has synchronization semantics corresponding to the value of failure.
// Returns: true if p was equivalent to expected, false otherwise.
// Remarks: Two shared_ptr objects are equivalent if they store the same pointer value and either share ownership or are both empty. The weak form may fail spuriously. See 33.5.8.2.
// If the operation returns true, expected is not accessed after the atomic update and the operation is an atomic read-modify-write operation (6.9.2) on the memory pointed to by this. Otherwise, the operation is an atomic load operation on that memory, and expected is updated with the existing value read from the atomic object in the attempted atomic update. The use_count update corresponding to the write to expected is part of the atomic operation. The write to expected itself is not required to be part of the atomic operation.
bool compare_exchange_weak(shared_ptr<T>& expected, shared_ptr<T> desired,
                           memory_order order = memory_order::seq_cst) noexcept;
// Effects: Equivalent to:
return compare_exchange_weak(expected, desired, order, fail_order);
// where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
bool compare_exchange_strong(shared_ptr<T>& expected, shared_ptr<T> desired,
                             memory_order order = memory_order::seq_cst) noexcept;
// Effects: Equivalent to:
return compare_exchange_strong(expected, desired, order, fail_order);
// where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
void wait(shared_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
// Preconditions: order is neither memory_order::release nor memory_order::acq_rel.
// Effects: Repeatedly performs the following steps, in order:
// — Evaluates load(order) and compares it to old.
// — If the two are not equivalent, returns.
// — Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
// Remarks: Two shared_ptr objects are equivalent if they store the same pointer and either share ownership or are both empty. This function is an atomic waiting operation (33.5.6).
   void notify_one() noexcept;
// Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked (33.5.6) by this call, if any such atomic waiting operations exist.
// Remarks: This function is an atomic notifying operation (33.5.6).
void notify_all() noexcept;
// Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked (33.5.6) by this call.
// Remarks: This function is an atomic notifying operation (33.5.6).
// 33.5.8.7.3 Partial specialization for weak_ptr [util.smartptr.atomic.weak]
  namespace std {
    template<class T> struct atomic<weak_ptr<T>> {
      using value_type = weak_ptr<T>;
      static constexpr bool is_always_lock_free = implementation-defined;
      bool is_lock_free() const noexcept;
      constexpr atomic() noexcept;
      atomic(weak_ptr<T> desired) noexcept;
      atomic(const atomic&) = delete;
      void operator=(const atomic&) = delete;
      weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
      operator weak_ptr<T>() const noexcept;
      void store(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
      void operator=(weak_ptr<T> desired) noexcept;
      weak_ptr<T> exchange(weak_ptr<T> desired,
                           memory_order order = memory_order::seq_cst) noexcept;
      bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                                 memory_order success, memory_order failure) noexcept;
      bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                                   memory_order success, memory_order failure) noexcept;
      bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                                 memory_order order = memory_order::seq_cst) noexcept;
      bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                                   memory_order order = memory_order::seq_cst) noexcept;
      void wait(weak_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
      void notify_one() noexcept;
      void notify_all() noexcept;
    private:
      weak_ptr<T> p;  // exposition only
    };
}
constexpr atomic() noexcept;
// Effects: Initializes p{}.
atomic(weak_ptr<T> desired) noexcept;
// Effects: Initializes the object with the value desired. Initialization is not an atomic operation (6.9.2).
// [Note 1: It is possible to have an access to an atomic object A race with its construction, for example, by communicating the address of the just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior. — end note]
   void store(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
// Preconditions: order is neither memory_order::consume, memory_order::acquire, nor memory_order::acq_rel.
// Effects: Atomically replaces the value pointed to by this with the value of desired as if by p.swap(desired). Memory is affected according to the value of order.
void operator=(weak_ptr<T> desired) noexcept;
// Effects: Equivalent to store(desired).
weak_ptr<T> load(memory_order order = memory_order::seq_cst) const noexcept;
// Preconditions: order is neither memory_order::release nor memory_order::acq_rel.
// Effects: Memory is affected according to the value of order.
// Returns: Atomically returns p.
operator weak_ptr<T>() const noexcept;
// Effects: Equivalent to: return load();
weak_ptr<T> exchange(weak_ptr<T> desired, memory_order order = memory_order::seq_cst) noexcept;
// Effects: Atomically replaces p with desired as if by p.swap(desired). Memory is affected according to the value of order. This is an atomic read-modify-write operation (6.9.2.2).
// Returns: Atomically returns the value of p immediately before the effects.
bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                           memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                             memory_order success, memory_order failure) noexcept;
// Preconditions: failure is neither memory_order::release nor memory_order::acq_rel.
// Effects: If p is equivalent to expected, assigns desired to p and has synchronization semantics corresponding to the value of success, otherwise assigns p to expected and has synchronization semantics corresponding to the value of failure.
// Returns: true if p was equivalent to expected, false otherwise.
// Remarks: Two weak_ptr objects are equivalent if they store the same pointer value and either share ownership or are both empty. The weak form may fail spuriously. See 33.5.8.2.
// If the operation returns true, expected is not accessed after the atomic update and the operation is an atomic read-modify-write operation (6.9.2) on the memory pointed to by this. Otherwise, the operation is an atomic load operation on that memory, and expected is updated with the existing value read from the atomic object in the attempted atomic update. The use_count update corresponding to the write to expected is part of the atomic operation. The write to expected itself is not required to be part of the atomic operation.
bool compare_exchange_weak(weak_ptr<T>& expected, weak_ptr<T> desired,
                           memory_order order = memory_order::seq_cst) noexcept;
// Effects: Equivalent to:
return compare_exchange_weak(expected, desired, order, fail_order);
// where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
bool compare_exchange_strong(weak_ptr<T>& expected, weak_ptr<T> desired,
                             memory_order order = memory_order::seq_cst) noexcept;
// Effects: Equivalent to:
return compare_exchange_strong(expected, desired, order, fail_order);
// where fail_order is the same as order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed.
void wait(weak_ptr<T> old, memory_order order = memory_order::seq_cst) const noexcept;
// Preconditions: order is neither memory_order::release nor memory_order::acq_rel. 
// Effects: Repeatedly performs the following steps, in order:
// — Evaluates load(order) and compares it to old.
// — If the two are not equivalent, returns.
// — Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
// Remarks: Two weak_ptr objects are equivalent if they store the same pointer and either share ownership or are both empty. This function is an atomic waiting operation (33.5.6).
void notify_one() noexcept;
// Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked (33.5.6) by this call, if any such atomic waiting operations exist.
// Remarks: This function is an atomic notifying operation (33.5.6).
void notify_all() noexcept;
// Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked (33.5.6) by this call.
// Remarks: This function is an atomic notifying operation (33.5.6).
// 33.5.9 Non-member functions [atomics.nonmembers]
// A non-member function template whose name matches the pattern atomic_f or the pattern atomic_f_explicit invokes the member function f, with the value of the first parameter as the object expression and the values of the remaining parameters (if any) as the arguments of the member function call, in order. An argument for a parameter of type atomic<T>::value_type* is dereferenced when passed to the member function call. If no such member function exists, the program is ill-formed.
// [Note 1: The non-member functions enable programmers to write code that can be compiled as either C or C++, for example in a shared header file. — end note]
// 33.5.10 Flag type and operations [atomics.flag]
  namespace std {
    struct atomic_flag {
      constexpr atomic_flag() noexcept;
      atomic_flag(const atomic_flag&) = delete;
      atomic_flag& operator=(const atomic_flag&) = delete;
      atomic_flag& operator=(const atomic_flag&) volatile = delete;
      bool test(memory_order = memory_order::seq_cst) const volatile noexcept;
      bool test(memory_order = memory_order::seq_cst) const noexcept;
      bool test_and_set(memory_order = memory_order::seq_cst) volatile noexcept;
      bool test_and_set(memory_order = memory_order::seq_cst) noexcept;
      void clear(memory_order = memory_order::seq_cst) volatile noexcept;
      void clear(memory_order = memory_order::seq_cst) noexcept;
      void wait(bool, memory_order = memory_order::seq_cst) const volatile noexcept;
      void wait(bool, memory_order = memory_order::seq_cst) const noexcept;
      void notify_one() volatile noexcept;
      void notify_one() noexcept;
      void notify_all() volatile noexcept;
      void notify_all() noexcept;
    };
}
// The atomic_flag type provides the classic test-and-set functionality. It has two states, set and clear.
//  Operations on an object of type atomic_flag shall be lock-free. The operations should also be address-free.
                              memory_order::seq_cst) const noexcept;
For atomic_flag_wait, let order be memory_order::seq_cst. Let flag be object for the non- member functions and this for the member functions.
// Preconditions: order is neither memory_order::release nor memory_order::acq_rel. Effects: Repeatedly performs the following steps, in order:
// — Evaluates flag->test(order) != old.
// — If the result of that evaluation is true, returns.
// — Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously.
// Effects: Initializes *this to the clear state.
//  The atomic_flag type is a standard-layout struct. It has a trivial destructor. constexpr atomic_flag::atomic_flag() noexcept;
bool atomic_flag_test(const volatile atomic_flag* object) noexcept;
bool atomic_flag_test(const atomic_flag* object) noexcept;
bool atomic_flag_test_explicit(const volatile atomic_flag* object,
                               memory_order order) noexcept;
bool atomic_flag_test_explicit(const atomic_flag* object,
                               memory_order order) noexcept;
bool atomic_flag::test(memory_order order = memory_order::seq_cst) const volatile noexcept;
bool atomic_flag::test(memory_order order = memory_order::seq_cst) const noexcept;
// For atomic_flag_test, let order be memory_order::seq_cst.
// Preconditions: order is neither memory_order::release nor memory_order::acq_rel. Effects: Memory is affected according to the value of order.
// Returns: Atomically returns the value pointed to by object or this.
bool atomic_flag_test_and_set(volatile atomic_flag* object) noexcept;
bool atomic_flag_test_and_set(atomic_flag* object) noexcept;
bool atomic_flag_test_and_set_explicit(volatile atomic_flag* object, memory_order order) noexcept;
bool atomic_flag_test_and_set_explicit(atomic_flag* object, memory_order order) noexcept;
bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) volatile noexcept;
bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) noexcept;
// Effects: Atomically sets the value pointed to by object or by this to true. Memory is affected according to the value of order. These operations are atomic read-modify-write operations (6.9.2).
// Returns: Atomically, the value of the object immediately before the effects.
void atomic_flag_clear(volatile atomic_flag* object) noexcept;
void atomic_flag_clear(atomic_flag* object) noexcept;
void atomic_flag_clear_explicit(volatile atomic_flag* object, memory_order order) noexcept;
void atomic_flag_clear_explicit(atomic_flag* object, memory_order order) noexcept;
void atomic_flag::clear(memory_order order = memory_order::seq_cst) volatile noexcept;
void atomic_flag::clear(memory_order order = memory_order::seq_cst) noexcept;
// Preconditions: The order argument is neither memory_order::consume, memory_order::acquire, nor memory_order::acq_rel.
// Effects: Atomically sets the value pointed to by object or by this to false. Memory is affected according to the value of order.
void atomic_flag_wait(const volatile atomic_flag* object, bool old) noexcept;
void atomic_flag_wait(const atomic_flag* object, bool old) noexcept;
void atomic_flag_wait_explicit(const volatile atomic_flag* object,
                               bool old, memory_order order) noexcept;
void atomic_flag_wait_explicit(const atomic_flag* object,
                               bool old, memory_order order) noexcept;
void atomic_flag::wait(bool old, memory_order order =
                                   memory_order::seq_cst) const volatile noexcept;
void atomic_flag::wait(bool old, memory_order order =
// Remarks: This function is an atomic waiting operation (33.5.6).
// Effects: Depending on the value of order, this operation:
// — has no effects, if order == memory_order::relaxed;
// — is an acquire fence, if order == memory_order::acquire or order == memory_order::consume; 
// — is a release fence, if order == memory_order::release;
// — is both an acquire fence and a release fence, if order == memory_order::acq_rel;
// — is a sequentially consistent acquire and release fence, if order == memory_order::seq_cst.
void atomic_flag_notify_one(volatile atomic_flag* object) noexcept;
void atomic_flag_notify_one(atomic_flag* object) noexcept;
void atomic_flag::notify_one() volatile noexcept;
void atomic_flag::notify_one() noexcept;
//  This subclause introduces synchronization primitives called fences. Fences can have acquire semantics, release semantics, or both. A fence with acquire semantics is called an acquire fence. A fence with release semantics is called a release fence.
//  A release fence A synchronizes with an acquire fence B if there exist atomic operations X and Y , both operating on some atomic object M, such that A is sequenced before X, X modifies M, Y is sequenced before B, and Y reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
//  A release fence A synchronizes with an atomic operation B that performs an acquire operation on an atomic object M if there exists an atomic operation X such that A is sequenced before X, X modifies M, and B reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
//  An atomic operation A that is a release operation on an atomic object M synchronizes with an acquire fence B if there exists some atomic operation X on M such that X is sequenced before B and reads the value written by A or a value written by any side effect in the release sequence headed by A.
extern "C" void atomic_thread_fence(memory_order order) noexcept;
// Effects: Unblocks the execution of at least one atomic waiting operation that is eligible to be unblocked (33.5.6) by this call, if any such atomic waiting operations exist.
// Remarks: This function is an atomic notifying operation (33.5.6).
void atomic_flag_notify_all(volatile atomic_flag* object) noexcept;
void atomic_flag_notify_all(atomic_flag* object) noexcept;
void atomic_flag::notify_all() volatile noexcept;
void atomic_flag::notify_all() noexcept;
// Effects: Unblocks the execution of all atomic waiting operations that are eligible to be unblocked (33.5.6) by this call.
// Remarks: This function is an atomic notifying operation (33.5.6).
// 33.5.11 Fences [atomics.fences]
extern "C" void atomic_signal_fence(memory_order order) noexcept;
// Effects: Equivalent to atomic_thread_fence(order), except that the resulting ordering constraints are established only between a thread and a signal handler executed in the same thread.
// [Note 1: atomic_signal_fence can be used to specify the order in which actions performed by the thread become visible to the signal handler. Compiler optimizations and reorderings of loads and stores are inhibited in the same way as with atomic_thread_fence, but the hardware fence instructions that atomic_thread_fence would have inserted are not emitted. — end note]
// 33.5.12 C compatibility [stdatomic.h.syn]
// The header <stdatomic.h> provides the following definitions:
template<class T>
  using std-atomic = std::atomic<T>;  // exposition only
#define _Atomic(T) std-atomic<T>
#define ATOMIC_BOOL_LOCK_FREE see_below 
#define ATOMIC_CHAR_LOCK_FREE see_below 
#define ATOMIC_CHAR16_T_LOCK_FREE see_below 
#define ATOMIC_CHAR32_T_LOCK_FREE see_below 
#define ATOMIC_WCHAR_T_LOCK_FREE see_below 
#define ATOMIC_SHORT_LOCK_FREE see_below 
#define ATOMIC_INT_LOCK_FREE see_below 
#define ATOMIC_LONG_LOCK_FREE see_below 
#define ATOMIC_LLONG_LOCK_FREE see_below 
#define ATOMIC_POINTER_LOCK_FREE see_below
using std::memory_order;
using std::memory_order_relaxed;
using std::memory_order_consume;
using std::memory_order_acquire;
using std::memory_order_release;
using std::memory_order_acq_rel;
using std::memory_order_seq_cst;
using std::atomic_flag;
using std::atomic_bool;
using std::atomic_char;
using std::atomic_schar;
using std::atomic_uchar;
using std::atomic_short;
using std::atomic_ushort;
using std::atomic_int;
using std::atomic_uint;
using std::atomic_long;
using std::atomic_ulong;
using std::atomic_llong;
using std::atomic_ullong;
using std::atomic_char8_t;
using std::atomic_char16_t;
using std::atomic_char32_t;
using std::atomic_wchar_t;
using std::atomic_int8_t;
using std::atomic_uint8_t;
using std::atomic_int16_t;
using std::atomic_uint16_t;
using std::atomic_int32_t;
using std::atomic_uint32_t;
using std::atomic_int64_t;
using std::atomic_uint64_t;
using std::atomic_int_least8_t;
using std::atomic_uint_least8_t;
using std::atomic_int_least16_t;
using std::atomic_uint_least16_t;
using std::atomic_int_least32_t;
using std::atomic_uint_least32_t;
using std::atomic_int_least64_t;
using std::atomic_uint_least64_t;
using std::atomic_int_fast8_t;
using std::atomic_uint_fast8_t;
using std::atomic_int_fast16_t;
using std::atomic_uint_fast16_t;
using std::atomic_int_fast32_t;
using std::atomic_uint_fast32_t;
using std::atomic_int_fast64_t;
using std::atomic_uint_fast64_t;
using std::atomic_intptr_t;
using std::atomic_uintptr_t;
using std::atomic_size_t;
using std::atomic_ptrdiff_t;
using std::atomic_intmax_t;
using std::atomic_uintmax_t;
using std::atomic_is_lock_free;
using std::atomic_load;
using std::atomic_load_explicit;
using std::atomic_store;
using std::atomic_store_explicit;
using std::atomic_exchange;
using std::atomic_exchange_explicit;
using std::atomic_compare_exchange_strong;
using std::atomic_compare_exchange_strong_explicit;
using std::atomic_compare_exchange_weak;
using std::atomic_compare_exchange_weak_explicit;
using std::atomic_fetch_add;
using std::atomic_fetch_add_explicit;
using std::atomic_fetch_sub;
using std::atomic_fetch_sub_explicit;
using std::atomic_fetch_or;
using std::atomic_fetch_or_explicit;
using std::atomic_fetch_and;
using std::atomic_fetch_and_explicit;
using std::atomic_flag_test_and_set;
using std::atomic_flag_test_and_set_explicit;
using std::atomic_flag_clear;
using std::atomic_flag_clear_explicit;
using std::atomic_thread_fence;
using std::atomic_signal_fence;
//  Each using-declaration for some name A in the synopsis above makes available the same entity as std::A declared in <atomic> (33.5.2). Each macro listed above other than _Atomic(T) is defined as in <atomic>. It is unspecified whether <stdatomic.h> makes available any declarations in namespace std.
//  Each of the using-declarations for intN_t, uintN_t, intptr_t, and uintptr_t listed above is defined if and only if the implementation defines the corresponding typedef-name in 33.5.2.
//  Neither the _Atomic macro, nor any of the non-macro global namespace declarations, are provided by any C++ standard library header other than <stdatomic.h>.
// Recommended practice: Implementations should ensure that C and C++ representations of atomic objects are compatible, so that the same object can be accessed as both an _Atomic(T) from C code and an atomic<T> from C++ code. The representations should be the same, and the mechanisms used to ensure atomicity and memory ordering should be compatible.
int main() {
    cout << n4910 << endl;
    return EXIT_SUCCESS;
}

Compile and run results (compile and go)

bash

Items for consideration (agenda)

Either eliminate the compile errors or explain why they occur.

Application example 1: AUTOSAR C++

AUTOSAR is developing a C++ coding standard.
This may be because they wanted to support C++14 without waiting for a revision of the MISRA C++ coding standard.

Autosar Guidelines C++14 example code compile list

On MISRA C++ and AUTOSAR C++

Application example 2: MISRA C/C++

MISRA C summary #include

MISRA C++ 5-0-16

Application example 3: CERT C/C++

SEI CERT C++ Coding Standard AA. Bibliography (under review).

Compiling MISRA C/C++, AUTOSAR C++, CERT C/C++ and the C/C++ industry standards

Application example 4: Hakoniwa (箱庭)

Hakoniwa mokumoku-kai (study session)

Session 11: TBD

The Hakoniwa project reportedly uses C++, including in Unity.

Using various compiler versions, I will check whether Hakoniwa uses code
similar to the code compiled here, and whether it produces compile errors
similar to the ones encountered here.

This item treats the Hakoniwa project as a subject of market analysis: in principle, it analyzes the project from the outside, imagines an external strategy meeting for promoting Hakoniwa, and as a result plans a virtual "Hakoniwa mokumoku-kai".
None of this content is affiliated with the Hakoniwa project, Athrill, or TOPPERS.
These are the personal musings of one participating data analyst.

Virtual strategy meeting "Hakoniwa"

Write a "Hakoniwa" article during Obon: material for a mokumoku-kai (1)

Write a "Hakoniwa" article during Obon: material for a mokumoku-kai (2)

Self-references (self reference)

Self-references other than directly related ones have been moved to the top of this page.

Misunderstandings, distortions, incomprehension, and exhilaration regarding the C language (C++).

#include "N4910.h"

Points for improvement in the C++ N4910 document

clang on docker

docker gnu(gcc/g++) and llvm(clang/clang++)

Shell scripts for compilation: C version (clang and gcc) and C++ version (clang++ and g++)

The C++N4910:2022 tag has passed 300 followers. Thank you.

Tried out astyle

<This article is a personal impression based on the author's past experience. It has no relation to my current organization or duties.>

Document history (document history)

ver. 0.01 First draft 20220921
