Commit 8fbedb6

Enforce canonical Arm identity
1 parent a1bb058 commit 8fbedb6

File tree

1 file changed: +12 −13 lines changed


concurrency-primer.tex

Lines changed: 12 additions & 13 deletions
@@ -67,16 +67,15 @@
 
 \newcommand{\codesize}{\fontsize{\bodyfontsize}{\bodybaselineskip}}
 
-% Syntax highlighting for ARM asm (minted doesn't do this well)
+% Syntax highlighting for Arm asm (minted doesn't do this well)
 \usepackage{listings}
 \lstset{
 basicstyle=\ttfamily\codesize\selectfont,
 keywordstyle=\color{darkGreen}\bfseries,
 commentstyle=\textcolor[rgb]{0.25,0.50,0.50}
 }
-% listings definitions for ARM assembly.
-% Get them from https://github.com/frosc/arm-assembler-latex-listings,
-% install as shown at http://tex.stackexchange.com/a/1138/92465
+% listings definitions for Arm assembly.
+% Get them from https://github.com/sysprog21/arm-assembler-latex-listings .
 \usepackage{lstlangarm} % See above
 
 \usepackage{changepage} % For adjustwidth
@@ -588,7 +587,7 @@ \section{Sequential consistency on weakly-ordered hardware}
 or \introduce{memory models}.
 For example, x64 is relatively \introduce{strongly-ordered},
 and can be trusted to preserve some system-wide order of loads and stores in most cases.
-Other architectures like \textsc{arm} are \introduce{weakly-ordered},
+Other architectures like \textsc{Arm} are \introduce{weakly-ordered},
 so you can not assume that loads and stores are executed in program order unless the \textsc{cpu} is given special instructions---
 called \introduce{memory barriers}---to not shuffle them around.
 
@@ -597,7 +596,7 @@ \section{Sequential consistency on weakly-ordered hardware}
 and to see why the \clang{} and \cplusplus{} concurrency models were designed as they were.\punckern\footnote{%
 It is worth noting that the concepts we discuss here are not specific to \clang{} and \cplusplus{}.
 Other systems programming languages like D and Rust have converged on similar models.}
-Let's examine \textsc{arm}, since it is both popular and straightforward.
+Let's examine \textsc{Arm}, since it is both popular and straightforward.
 Consider the simplest atomic operations: loads and stores.
 Given some \mintinline{cpp}{atomic_int foo},
 % Shield your eyes.
@@ -667,8 +666,8 @@ \section{Implementing atomic read-modify-write operations with LL/SC instructions}
 
 Like many other \textsc{risc}\footnote{%
 \introduce{Reduced instruction set computer},
-in contrast to a \introduce{complex instruction set computer} \textsc{(cisc)} architecture like x64.}
-architectures, \textsc{arm} lacks dedicated \textsc{rmw} instructions.
+in contrast to a \introduce{complex instruction set computer} \textsc{(cisc)} architecture like x64.} architectures,
+\textsc{Arm} lacks dedicated \textsc{rmw} instructions.
 And since the processor can context switch to another thread at any time,
 we can not build \textsc{rmw} ops from normal loads and stores.
 Instead, we need special instructions:
@@ -677,7 +676,7 @@ \section{Implementing atomic read-modify-write operations with LL/SC instructions}
 A load-link reads a value from an address---like any other load---but also instructs the processor to monitor that address.
 Store-conditional writes the given value \emph{only if} no other stores were made to that address since the corresponding load-link.
 Let's see them in action with an atomic fetch and add.
-On \textsc{arm},
+On \textsc{Arm},
 \begin{colfigure}
 \begin{minted}[fontsize=\codesize]{cpp}
 void incFoo() { ++foo; }
@@ -752,7 +751,7 @@ \section{Do we always need sequentially consistent operations?}
 \label{lock-example}
 
 All of our examples so far have been sequentially consistent to prevent reorderings that break our code.
-We've also seen how weakly-ordered architectures like \textsc{arm} use memory barriers to create sequential consistency.
+We have also seen how weakly-ordered architectures like \textsc{Arm} use memory barriers to create sequential consistency.
 But as you might expect,
 these barriers can have a noticeable impact on performance.
 After all,
@@ -1083,7 +1082,7 @@ \subsection{Consume}
 }
 \end{minted}
 \end{colfigure}
-and an \textsc{arm} compiler could emit:
+and an \textsc{Arm} compiler could emit:
 \begin{colfigure}
 \begin{lstlisting}[language={[ARM]Assembler}]
 ldr r3, &peripherals
@@ -1130,10 +1129,10 @@ \subsection{\textsc{Hc Svnt Dracones}}
 
 \section{Hardware convergence}
 
-Those familiar with \textsc{arm} may have noticed that all assembly shown here is for the seventh version of the architecture.
+Those familiar with \textsc{Arm} may have noticed that all assembly shown here is for the seventh version of the architecture.
 Excitingly, the eighth generation offers massive improvements for lockless code.
 Since most programming languages have converged on the memory model we have been exploring,
-\textsc{arm}v8 processors offer dedicated load-acquire and store-release instructions: \keyword{lda} and \keyword{stl}.
+\textsc{Arm}v8 processors offer dedicated load-acquire and store-release instructions: \keyword{lda} and \keyword{stl}.
 Hopefully, future \textsc{cpu} architectures will follow suit.
 
 \section{Cache effects and false sharing}
