Commit 30fe313

Update 05-Shared_computing_etiquette.Rmd
1 parent 5e9ac37 commit 30fe313

File tree

1 file changed: +1 -1 lines changed

05-Shared_computing_etiquette.Rmd

Lines changed: 1 addition & 1 deletion
@@ -153,7 +153,7 @@ Often there is a default file size limit for jobs. For example, the JHPCE has a
 
 In addition to the file size limit, you are often given a default amount of RAM for each job. Again, you can typically run a job with more RAM if you specify it. As with the file size limit, you will likely need to request the amount of RAM your job needs if it is above the default. This involves setting a lower and an upper limit on the RAM your job can use. If your job exceeds that amount of RAM, it will be stopped. Typically people call stopping a job "killing" it. The lower and upper limits can be the same number.
 
-How do you know how much RAM to assign to your job? Well, if you are performing a job with files that are two times the size of the default file size limit, then it might make sense to double the RAM you would typically use. It's also a good idea to test on one file first if you are going to perform the same job on multiple files. You can then assess how much RAM the job used. First, try to perform the job with lower limits, then progressively increase them until the job succeeds without being killed for exceeding the limit. Keep in mind, however, how much RAM there is on each node. It is important not to ask for all the RAM on a single node (or on a core of that node), as this would hog the node and prevent other users from using it. Remember that you will likely have the option to use multiple cores, which can also help you use less RAM per core. For example, a job that needs 120 GB of RAM could use 10 cores with 12 GB of RAM each.
+How do you know how much RAM to assign to your job? Well, if you are performing a job with files that are two times the size of the default file size limit, then it might make sense to double the RAM you would typically use. **It's also a good idea to test on one file first if you are going to perform the same job on multiple files.** You can then assess how much RAM the job used. First, try to perform the job with lower limits, then progressively increase them until the job succeeds without being killed for exceeding the limit. Keep in mind, however, how much RAM there is on each node. It is important not to ask for all the RAM on a single node (or on a core of that node), as this would hog the node and prevent other users from using it. Remember that you will likely have the option to use multiple cores, which can also help you use less RAM per core. For example, a job that needs 120 GB of RAM could use 10 cores with 12 GB of RAM each.
 
 Often there will be a limit on the number of jobs, the amount of RAM, and the number of cores that a single user can use beyond the defaults. This is to ensure that no single user consumes so many resources that others cannot perform their jobs. Check what these limits are, and then find the appropriate way to contact the administrators to request more. Again, communication standards and workflows may vary based on the resource.
 
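
To make the resource requests in the paragraphs above concrete: this excerpt doesn't name the cluster's scheduler, but on a SLURM-based system a job script might request cores and RAM like the sketch below. The script contents, job name, and file names are hypothetical; the numbers mirror the chapter's example of splitting 120 GB across 10 cores.

```bash
#!/bin/bash
#SBATCH --job-name=test_one_file    # hypothetical job name
#SBATCH --cpus-per-task=10          # 10 cores...
#SBATCH --mem-per-cpu=12G           # ...at 12 GB each = 120 GB total, as in the example above
#SBATCH --output=test_one_file.log  # where to write the job's log

# Hypothetical placeholder command; substitute the actual program and input file.
my_analysis --input sample_01.fastq
```

Submitted with `sbatch test_one_file.sh` (again a hypothetical filename). If the job's memory use climbs past the requested amount, the scheduler kills it, which is exactly the behavior described above.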

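The advice to test on one file and then assess how much RAM the job used maps onto the scheduler's accounting tools. A minimal sketch, again assuming SLURM and a hypothetical job ID:

```bash
# MaxRSS is the peak memory the job actually used; State shows whether it
# completed or was killed (e.g., OUT_OF_MEMORY).
sacct -j 1234567 --format=JobID,JobName,MaxRSS,Elapsed,State
```

Comparing MaxRSS against what you requested tells you whether to lower the next request or progressively raise it, as the paragraph suggests.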
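Finally, both "keep in mind how much RAM there is on each node" and "check what these limits are" start with inspecting the cluster itself. Still assuming SLURM:

```bash
# List each node with its CPU count and memory in MB, so a request never
# asks for everything a single node has.
sinfo -N -o "%N %c %m"
```

Per-user caps on jobs, cores, and RAM are policy settings rather than hardware facts, so when the defaults aren't enough, the right step, as the chapter says, is to contact the administrators rather than work around the limits.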