Hi,
Bjoern is correct - memory for executing jobs is allocated per job (and per job type). The general rule is that if a job will run on the command line, it will also run within Galaxy (provided both are using the same resources and the server is in a production configuration).
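For reference, on a local or cluster Galaxy this per-tool allocation is usually declared in the job configuration, where tools are mapped to destinations that carry the resource requests. Below is a minimal sketch of an XML-style job_conf.xml, assuming a Slurm cluster; the destination and tool IDs and the 16G figure are illustrative examples, not required values:

```xml
<?xml version="1.0"?>
<!-- Minimal sketch of a job_conf.xml: memory is requested per destination,
     and individual tools are routed to destinations (i.e., per job type).
     The IDs and the 16G request below are illustrative only. -->
<job_conf>
    <plugins>
        <!-- Local runner for lightweight jobs -->
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
        <!-- Slurm runner for memory-hungry tools -->
        <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
    </plugins>
    <destinations default="local">
        <destination id="local" runner="local"/>
        <destination id="slurm_16g" runner="slurm">
            <!-- Ask the scheduler for 16G for jobs sent to this destination -->
            <param id="nativeSpecification">--mem=16G</param>
        </destination>
    </destinations>
    <tools>
        <!-- Route a specific tool (example ID) to the 16G destination -->
        <tool id="bwa_mem" destination="slurm_16g"/>
    </tools>
</job_conf>
```

Any tool not listed under `<tools>` falls back to the default destination, which is how different job types end up with different memory allocations on the same server.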
Many tools run with 8G of memory, others with 16G, and a select few need more. Much of what a server/cluster requires in terms of resources will depend on the analysis actually being done, but that said, 16G is about the minimum needed on a local Galaxy if computations go beyond simple file manipulations and/or the inputs are large. Any 3rd-party tools will have information about their resource requirements - and many also have recommendations for managing memory usage through parameter tuning. As you know, it is certainly possible to create jobs that will never run on any server, in Galaxy or not.

If you have the chance, testing on or deciding to use a cloud Galaxy (where much more memory can be allocated) can be a useful choice. AWS offers generous grants for projects, and Cloudman is super convenient to start up and use.
Hope this helps! Jen, Galaxy team