I just spun up an AF cluster and it worked a treat! First time, with no problems - amazing. I've updated my docs accordingly.
I've also been working on the S3 reference genomes recently - Amazon gave us a grant to cover the hosting, so it should be around for at least a year now. I'm now trying to turn this into an easy-to-use resource with accompanying code. For example, I'd like to provide an interactive command-line script that makes it easy to fetch reference genomes. I've made a start here, but it doesn't work yet: https://github.com/ewels/AWS-iGenomes - when it's done I think it'll be super cool and really helpful for this setup.
The detection routine requires a compute node to be running when the Cluster Flow module is loaded for the first time.
Yup, I came across this. In the docs I added an instruction to fire off an empty job if no nodes are running, so that some are created before loading CF. It's a bit of a pain as you have to wait a few minutes, but it seems to work OK. I take it there's no way to pass information about the cluster config to the environment?
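For reference, the "empty job" I mention is just something along these lines (a sketch - the flags and module name are illustrative and may differ on other setups):

```shell
# Submit a trivial do-nothing job so the autoscaler spins up a compute node.
# -b y tells SGE to treat 'true' as a binary rather than a script file;
# -cwd runs in the current directory; -N just names the job.
qsub -b y -cwd -N warmup true

# Wait a few minutes, then check that a node has appeared before loading CF:
qstat -f

# Module name is an assumption - use whatever your Gridware install provides:
module load apps/clusterflow
```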
The customization feature is currently only available in eu-west-1 (Ireland) as it's using yet-to-be-released Gridware binaries.
This is fine for now - that's also where I run everything and where the above AWS-iGenomes S3 bucket is. So no rush from my end here.
-l h_vmem SGE problem
Coming back to this - the default parallel environment on my old cluster was orte. Is this different from smp? In other words, should I always do this division for SGE jobs, or only with specific setups? I guess I can either hardcode this behaviour or make it happen only when a config option is specified. Apologies - I'm a complete novice when it comes to this stuff.
Thanks again for your work on this - I'm amazed at how easy it is to run now! Once the dust has settled a little more, I think I'll polish the docs and record a screencast for the clusterflow website.