
Commit 5077478

Merge branch 'main' of github.com:rangsimanketkaew/tensorflow_cpp_api into main
2 parents c4f3533 + 1e97e3b

4 files changed: +85 -16 lines

README.md

+5-5
@@ -5,16 +5,16 @@ https://www.tensorflow.org/api_docs/cc
 
 ## What is it exactly?
 
-The functionality of the TensorFlow (TF) is implemented in many languages and one of them is C++,
+The functionality of TensorFlow (TF) is implemented in many languages and one of them is C++,
 but Python is used as an interface (front-end) between the backend system and the users.
-The TF C++ API his built on TensorFlow Session (in the old version) and on TensorFlow ClientSession (in the new version).
-One can make use either of these to execute TensorFlow graphs that have been built using the Python API and serialized to a `GraphDef`.
+The TF C++ API is built on TensorFlow Session (in the old version) and TensorFlow ClientSession (in the new version).
+One can make use of either of these to execute TensorFlow graphs that have been built using the Python API and serialized to a `GraphDef`.
 
 ## C++ API for TensorFlow
 
 The runtime of TensorFlow is written in C++, and mostly, C++ is connected to TensorFlow through header files in `tensorflow/cc`.
-The C++ API is still in experimental stages of development, and also the documentation is being updated,
-meaning that it lacks of information and tutorial about how to use TensorFlow API.
+The C++ API is still in the experimental stages of development, and the documentation is still being updated,
+meaning that it lacks information and tutorials on how to use the TensorFlow API.
 
 ## Content
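To make the ClientSession workflow mentioned in the README concrete, here is a minimal, hypothetical C++ sketch (not part of this commit) that builds a tiny graph with the C++ ops API and executes it with `tensorflow::ClientSession`; it assumes the TF C++ headers and shared library are already built and linkable:

```cpp
// Minimal ClientSession sketch: build a small graph and run it.
// Assumes TensorFlow's C++ headers and libtensorflow_cc are available.
#include <iostream>
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();
  // c = a * b, expressed with the C++ ops API
  auto a = Const(root, {{1.f, 2.f}, {3.f, 4.f}});
  auto b = Const(root, {{1.f, 0.f}, {0.f, 1.f}});
  auto c = MatMul(root, a, b);

  ClientSession session(root);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({c}, &outputs));
  std::cout << outputs[0].matrix<float>() << std::endl;
  return 0;
}
```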

compile_tensorflow_cpp.md

+17-6
@@ -91,7 +91,14 @@ chmod +x bazel-3.7.2-installer-linux-x86_64.sh
 
 ### 1. Compile TensorFlow C++ shared library (with optimization)
 
-Download or clone github repo to your system:
+Download a tarball of TensorFlow (I strongly prefer v2.7 to other versions),
+e.g., from https://github.com/tensorflow/tensorflow/releases/tag/v2.7.4
+```bash
+wget https://github.com/tensorflow/tensorflow/archive/refs/tags/v2.7.4.tar.gz
+tar -xzvf v2.7.4.tar.gz
+cd tensorflow-2.7.4
+```
+or clone the GitHub repo to your system:
 ```bash
 git clone https://github.com/tensorflow/tensorflow
 cd tensorflow
@@ -165,11 +172,15 @@ Note:
 1. Building TensorFlow can consume a lot of memory, so I prefer a small number of CPUs (`--jobs`), e.g., for 4 CPUs use `--jobs=4`.
 2. Limit the RAM requested by bazel with `--local_ram_resources`. The value is either an integer (e.g., for 2048 MB use `--local_ram_resources=2048`) or a % of total memory (e.g., for 50% use `"HOST_RAM*.50"`).
 3. The whole process can take up to 1 hour.
-4. Add `-D_GLIBCXX_USE_CXX11_ABI=0` if you use GCC 5 or higher version.
-5. Flags for optimization: `--copt="-O3"`.
-6. Flasg for both AMD and Intel chips: `--copt=-mfma --copt=-msse4.1 --copt=-msse4.2 --copt=-mfpmath=both`.
-7. Flags for Intel: `--copt=-mavx --copt=-mavx2`.
-8. Rebuild with `--config=monolithic` if you want to compile all TensorFlow C++ code into a single shared object.
+4. If you don't want Bazel to create cache files in your local space, add [`--output_user_root`](https://docs.bazel.build/versions/main/user-manual.html#flag--output_user_root) to change the directory where output and base files will be created, e.g.,
+```bash
+bazel --output_user_root=/scratch/bazel/ build ...
+```
+5. Add `-D_GLIBCXX_USE_CXX11_ABI=0` if you use GCC 5 or a higher version.
+6. Flags for optimization: `--copt="-O3"`.
+7. Flags for both AMD and Intel chips: `--copt=-mfma --copt=-msse4.1 --copt=-msse4.2 --copt=-mfpmath=both`.
+8. Flags for Intel: `--copt=-mavx --copt=-mavx2`.
+9. Rebuild with `--config=monolithic` if you want to compile all TensorFlow C++ code into a single shared object.
 
 **Optional 1:** Test
 ```bash

load_model_tensorflow_cpp.md

+14
@@ -62,6 +62,20 @@ Status status = LoadSavedModel(session_options, run_options, export_dir, {kSaved
 if (!status.ok()) {
   std::cerr << "Failed: " << status;
 }
+
+auto sig_map = model_bundle.GetSignatures();
+auto model_def = sig_map.at("serving_default");
+
+for (auto const& p : sig_map) {
+  std::cout << "key: " << p.first.c_str() << std::endl;
+}
+for (auto const& p : model_def.inputs()) {
+  std::cout << "key: " << p.first.c_str() << " " << p.second.name().c_str() << std::endl;
+}
+for (auto const& p : model_def.outputs()) {
+  std::cout << "key: " << p.first.c_str() << " " << p.second.name().c_str() << std::endl;
+}
+
 return 0;
 }
 ```
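Beyond printing the signature, a hedged sketch of actually running the loaded model could follow the loop above; the tensor names are resolved from `model_def` rather than hard-coded, while the input shape `{1, 5}`, the dummy data, and the single-input/single-output layout are illustrative assumptions, not part of this commit:

```cpp
// Sketch: feed one input tensor and fetch one output from the loaded bundle.
// The shape {1, 5} and the one-input/one-output layout are assumptions.
tensorflow::Tensor input(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 5}));
auto flat = input.flat<float>();
for (int i = 0; i < flat.size(); ++i) flat(i) = 0.5f;  // dummy data

// Resolve the real tensor names from the signature instead of hard-coding them.
const std::string input_name  = model_def.inputs().begin()->second.name();
const std::string output_name = model_def.outputs().begin()->second.name();

std::vector<tensorflow::Tensor> outputs;
tensorflow::Status run_status = model_bundle.GetSession()->Run(
    {{input_name, input}}, {output_name}, {}, &outputs);
if (!run_status.ok()) {
  std::cerr << "Run failed: " << run_status;
} else {
  std::cout << outputs[0].DebugString() << std::endl;
}
```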

tensorflow_serving.md

+49-5
@@ -3,19 +3,25 @@
 ## SavedModel
 
 **Save your (keras/tf) model:**
-```
+```python
 model.compile(...)
 model.fit(...)
 model.save("/path/to/your/model/folder/")
 ```
 
-**Get model input:**
+or you can use the [`tf.saved_model.save`](https://www.tensorflow.org/api_docs/python/tf/saved_model/save) method to save an object in the SavedModel format at a low level:
+
+```python
+tf.saved_model.save(model, export_dir)
 ```
+
+**Get model input:**
+```bash
 $ saved_model_cli show --tag_set serve --signature_def serving_default --dir model/
 ```
 
 output:
-```
+```bash
 The given SavedModel SignatureDef contains the following input(s):
   inputs['dense_input'] tensor_info:
       dtype: DT_FLOAT
@@ -29,12 +35,50 @@ The given SavedModel SignatureDef contains the following output(s):
 Method name is: tensorflow/serving/predict
 ```
 
-The name of input and output tensors are `serving_default_dense_input:0` and `StatefulPartitionedCall:0`, respectively
+The names of the input and output tensors are `serving_default_dense_input` and `StatefulPartitionedCall`, respectively.
+
+You can also use the `--all` argument to print all information, like this:
+
+```bash
+MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
+
+signature_def['__saved_model_init_op']:
+  The given SavedModel SignatureDef contains the following input(s):
+  The given SavedModel SignatureDef contains the following output(s):
+    outputs['__saved_model_init_op'] tensor_info:
+        dtype: DT_INVALID
+        shape: unknown_rank
+        name: NoOp
+  Method name is:
+
+signature_def['serving_default']:
+  The given SavedModel SignatureDef contains the following input(s):
+    inputs['args_0'] tensor_info:
+        dtype: DT_FLOAT
+        shape: (-1, 394)
+        name: serving_default_args_0:0
+    inputs['args_0_1'] tensor_info:
+        dtype: DT_FLOAT
+        shape: (-1, 99)
+        name: serving_default_args_0_1:0
+  The given SavedModel SignatureDef contains the following output(s):
+    outputs['out_1'] tensor_info:
+        dtype: DT_FLOAT
+        shape: (-1, 394)
+        name: StatefulPartitionedCall:0
+    outputs['out_2'] tensor_info:
+        dtype: DT_FLOAT
+        shape: (-1, 99)
+        name: StatefulPartitionedCall:1
+  Method name is: tensorflow/serving/predict
+
+...
+```
 
 ## Run model
 
 **Load model**
-```
+```cpp
 const std::string export_dir = "/path/to/your/model/folder/";
 
 tensorflow::SavedModelBundle model_bundle;
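The tensor names reported by `saved_model_cli` are exactly what the C++ session expects. A hypothetical continuation of the truncated load snippet above, feeding `serving_default_dense_input:0` and fetching `StatefulPartitionedCall:0` (the feature dimension 4, the batch size 1, and the zero-filled input are illustrative assumptions):

```cpp
// Hypothetical continuation: load the bundle, then run the "serving_default"
// signature using the tensor names printed by saved_model_cli.
tensorflow::SessionOptions session_options;
tensorflow::RunOptions run_options;
tensorflow::Status status = tensorflow::LoadSavedModel(
    session_options, run_options, export_dir,
    {tensorflow::kSavedModelTagServe}, &model_bundle);

// Batch of 1; the feature dimension (here 4) is an assumption.
tensorflow::Tensor x(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 4}));
x.flat<float>().setZero();

std::vector<tensorflow::Tensor> y;
status = model_bundle.GetSession()->Run(
    {{"serving_default_dense_input:0", x}},  // feed: input tensor name
    {"StatefulPartitionedCall:0"},           // fetch: output tensor name
    {}, &y);
if (status.ok()) {
  std::cout << y[0].DebugString() << std::endl;
}
```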
