
Commit 812a58b

Author: Michael Wenk
Merge pull request #15 from michaelwenk/filter-data-by-ppm
chore: enable stereo information in spectral knowledge base & filter data by ppm
2 parents c166a3c + c3a10c9 commit 812a58b

File tree: 15 files changed, +726 -518 lines

README.md (-118 lines)
@@ -61,121 +61,3 @@ If the removal of the network created by docker-compose is desired, then use the

docker-compose -f docker-compose.yml -f docker-compose.publish.yml down

<!---
## Self Compilation and Dependencies

### Compilation
CASEkit (https://github.com/michaelwenk/casekit) has to be downloaded and compiled beforehand.

Now add the jar file to the local Maven repository using the following command (replace "PATH/TO/CASEKIT-JAR-WITH-DEPENDENCIES" with the path to the previously built CASEkit jar):

mvn install:install-file -Dfile=PATH/TO/CASEKIT-JAR-WITH-DEPENDENCIES -DgroupId=org.openscience -DartifactId=casekit -Dversion=1.0 -Dpackaging=jar
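
To verify that the artifact was installed, one can list the path these coordinates map to in the local Maven repository (a quick check, assuming the default ~/.m2 location):

ls ~/.m2/repository/org/openscience/casekit/1.0/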

Clone this repository:

git clone https://github.com/michaelwenk/sherlock.git

Change into the directory and build all the .jar files needed for this project using the build shell script:

cd sherlock
sh buildJars.sh

### Dependencies
Some services rely on specific software or file dependencies which have to be downloaded and put into certain places.

#### PyLSD
For the structure generation part, PyLSD (http://eos.univ-reims.fr/LSD/JmnSoft/PyLSD/) is needed.
PyLSD can be downloaded from http://eos.univ-reims.fr/LSD/JmnSoft/PyLSD/INSTALL.html.

Extract the archive and rename the new PyLSD folder to "PyLSD", if needed.

Now put the PyLSD folder into

backend/sherlock-pylsd/data/lsd/
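
For example, assuming the extracted and renamed "PyLSD" folder lies in the current directory and you are in the repository root:

mkdir -p backend/sherlock-pylsd/data/lsd
mv PyLSD backend/sherlock-pylsd/data/lsd/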

If custom filters are desired, one can create a folder "filters" in

backend/sherlock-pylsd/data/lsd/

and put the custom filters there. The system will use them automatically.
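
For example ("my_filter" is only a placeholder name for your own filter file):

mkdir -p backend/sherlock-pylsd/data/lsd/filters
cp my_filter backend/sherlock-pylsd/data/lsd/filters/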

For more details about LSD and defining substructures and filters, see http://eos.univ-reims.fr/LSD/MANUAL_ENG.html#SSTR.

#### NMRShiftDB
For the dereplication, automatic hybridization detection and chemical shift prediction via HOSE codes, the NMRShiftDB (https://nmrshiftdb.nmr.uni-koeln.de) is required.

Download "nmrshiftdb2withsignals.sd" from https://sourceforge.net/projects/nmrshiftdb2/files/data/ and copy it into

backend/sherlock-db-service-dataset/data/nmrshiftdb/

and rename the file to "nmrshiftdb.sdf".
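
For example, assuming the downloaded file lies in the current directory and you are in the repository root:

cp nmrshiftdb2withsignals.sd backend/sherlock-db-service-dataset/data/nmrshiftdb/nmrshiftdb.sdf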

### Docker and Application Start/Stop
This project uses Docker containers (https://www.docker.com) and starts them via docker-compose. Make sure that docker-compose is installed.
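
This can be verified with:

docker-compose --version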

#### Build
To build the container images, use the following command:

docker-compose -f docker-compose.yml -f docker-compose.production.yml build

#### Start
To start this application (in detached mode), use

docker-compose -f docker-compose.yml -f docker-compose.production.yml up -d

Note: It can take a few minutes until all services are available and registered.
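
For example, the status of the containers can be checked with:

docker-compose -f docker-compose.yml -f docker-compose.production.yml ps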

#### Stop
To stop this application, use

docker-compose -f docker-compose.yml -f docker-compose.production.yml down

### Docker Container and Data Preparation/Persistence
The databases for datasets and hybridizations have to be filled when starting the application for the first time.

After that procedure, the container database contents are stored in the "data/db" subdirectory of each "db-instance" service.
This persists the database content, so the data remains accessible whenever the database services restart.

#### Dataset
For dataset creation and insertion, use:

curl -X POST -i 'http://localhost:8081/sherlock-db-service-dataset/replaceAll?nuclei=13C'

This will fill in datasets with 13C spectra only. If multiple nuclei are desired, add them separated by commas, e.g. 13C,15N:

curl -X POST -i 'http://localhost:8081/sherlock-db-service-dataset/replaceAll?nuclei=13C,15N'

One can then check the number of datasets:

curl -X GET -i 'http://localhost:8081/sherlock-db-service-dataset/count'

#### Statistics
As for the datasets, we need to build the hybridization and connectivity statistics and can decide which nuclei to consider:

curl -X POST -i 'http://localhost:8081/sherlock-db-service-statistics/hybridization/replaceAll?nuclei=13C'
curl -X POST -i 'http://localhost:8081/sherlock-db-service-statistics/connectivity/replaceAll?nuclei=13C'

To check the number of hybridization/connectivity entries:

curl -X GET -i 'http://localhost:8081/sherlock-db-service-statistics/hybridization/count'
curl -X GET -i 'http://localhost:8081/sherlock-db-service-statistics/connectivity/count'

#### HOSE Codes
One needs to insert the HOSE code information as well:

curl -X POST -i 'http://localhost:8081/sherlock-db-service-hosecode/replaceAll?nuclei=13C&maxSphere=6'

To check the number of HOSE code entries:

curl -X GET -i 'http://localhost:8081/sherlock-db-service-hosecode/count'

For spectra prediction, a map of HOSE codes and their assigned statistics is needed.
Therefore, execute the following command to store this map in a shared volume:

curl -X GET -i 'http://localhost:8081/sherlock-db-service-hosecode/saveAllAsMap'

-->
