
[ee4j-build] GlassFish CI setup in Eclipse environment - Need help from Eclipse Infra team (Bug 531385)

As we are discussing in Bug 531385, let me explain why we need an accessible NFS share for the GlassFish CI in the Eclipse infrastructure.

Current Design of GlassFish CI - The upstream job builds the GlassFish source code and produces the binary plus the test sources consumed downstream. The upstream job then BLOCKs itself and spawns the downstream jobs. Each downstream job does an SCP to the upstream node and downloads the binary and test sources. After downloading the required files via SCP, the downstream jobs run the different GlassFish test ids concurrently. Once the tests are complete, the downstream jobs SCP the test results back to the upstream node. Once all the downstream jobs are complete, the upstream job UNBLOCKs itself, aggregates and publishes the test results.

Possible Design Choices - The following are some of the design choices and their feasibility in the current scenario:

a) Jenkins Copy Artifact plugin : The Copy Artifact plugin can only copy artifacts from a build that has already completed. As per the current design, the upstream job stays blocked until all the downstream jobs complete so that it can aggregate the results, which means its artifacts are not yet available while the downstream jobs run. So we can't use the Copy Artifact plugin in the current scenario.
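For illustration, this is roughly what a downstream job would have to do with that plugin (the job name, filter and target below are made up); the selector can only resolve builds that have already finished, which is exactly what we do not have while the upstream build is still blocked:

    // Hypothetical downstream usage of the Copy Artifact plugin (pipeline syntax).
    // The selector only resolves *completed* upstream builds, so the artifacts of
    // the still-running (blocked) upstream build are not visible to this step.
    copyArtifacts projectName: 'glassfish-upstream-build',   // made-up job name
                  selector: lastSuccessful(),                // finished builds only
                  filter: 'bundles/**, test-sources/**',
                  target: 'input'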

b) Jenkins Pipeline plugin : The Pipeline plugin is probably the most suitable candidate for this kind of use case. However, we are not using it today, and adopting it would mean completely redesigning and reimplementing the GlassFish CI. In that case it would not be a migration effort for the current GlassFish CI, but a reimplementation effort (much higher than a migration effort).
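To give an idea of what that reimplementation would involve, a pipeline-based design could look roughly like the sketch below (script names and test ids are invented here; stash/unstash would replace the SCP exchange entirely):

    // Rough sketch only - script names and test ids below are invented.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './build-glassfish.sh'    // produces binary + test sources
                    stash name: 'bundle', includes: 'bundles/**, test-sources/**'
                }
            }
            stage('Test') {
                parallel {
                    stage('test-id-1') {
                        agent any
                        steps {
                            unstash 'bundle'     // no SCP or NFS needed
                            sh './run-tests.sh test-id-1'
                            stash name: 'results-test-id-1', includes: 'results/**'
                        }
                    }
                    stage('test-id-2') {
                        agent any
                        steps {
                            unstash 'bundle'
                            sh './run-tests.sh test-id-2'
                            stash name: 'results-test-id-2', includes: 'results/**'
                        }
                    }
                }
            }
            stage('Aggregate') {
                steps {
                    unstash 'results-test-id-1'
                    unstash 'results-test-id-2'
                    junit 'results/**/*.xml'     // aggregate and publish the results
                }
            }
        }
    }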

c) Use SCP : This is the current design. It worked inside the Oracle infrastructure because all the Jenkins build agents had passwordless access set up for a common CI user (they shared a common ssh key located on an NFS). We also tweaked the /etc/shadow and /etc/passwd files of the Docker container to create the same user inside the container (to make passwordless SCP possible from inside the container). One drawback of this approach is that it is user dependent and not a truly portable design. Another drawback is that when multiple nodes do SCP concurrently, the SCP may fail at high levels of concurrency, depending on the ssh configuration.
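Roughly what the downstream side does today, written here as pipeline-style sh steps only for readability (the CI user, host name, paths and variables are placeholders, not the real values):

    // Placeholders only: 'ciuser', 'upstream-node', paths and variables are made up.
    // Download the binary and the test sources from the upstream node.
    sh 'scp -r ciuser@upstream-node:/scratch/${UPSTREAM_BUILD_ID}/bundle .'
    sh 'scp -r ciuser@upstream-node:/scratch/${UPSTREAM_BUILD_ID}/test-sources .'

    // ... run the assigned test id ...

    // Upload the results; when many downstream nodes do this at the same time,
    // sshd connection limits (e.g. MaxStartups) can make the scp fail intermittently.
    sh 'scp -r results ciuser@upstream-node:/scratch/${UPSTREAM_BUILD_ID}/results/${TEST_ID}'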

d) Use NFS to store the files required by upstream and downstream : Instead of doing the SCP, the upstream job can put the files required downstream in a temporary NFS location, and the downstream jobs can pick them up from there. Similarly, once the downstream testing is complete, the downstream jobs can put the test results in the same NFS location so that the upstream job can collect them. At the end of the execution the upstream job can clean up the location.
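A minimal sketch of what that exchange could look like, assuming a mount point such as /nfs/glassfish-ci (the mount point, the layout, and how the exchange location and test id are passed to the downstream jobs are all assumptions):

    // Upstream job, after the build (BUILD_TAG is a standard Jenkins variable;
    // everything else below is an assumption for illustration).
    sh '''
        EXCHANGE=/nfs/glassfish-ci/${BUILD_TAG}
        mkdir -p ${EXCHANGE}/input ${EXCHANGE}/results
        cp -r bundles test-sources ${EXCHANGE}/input/
    '''

    // Downstream job, per test id (EXCHANGE and TEST_ID passed down as job parameters).
    sh '''
        cp -r ${EXCHANGE}/input/* .
        ./run-tests.sh ${TEST_ID}
        cp -r results ${EXCHANGE}/results/${TEST_ID}
    '''

    // Upstream job, once all downstream jobs have finished.
    sh '''
        cp -r /nfs/glassfish-ci/${BUILD_TAG}/results .   # aggregate and publish
        rm -rf /nfs/glassfish-ci/${BUILD_TAG}            # cleanup
    '''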

In the current scenario, IMO option (d) is the cleanest design (moving the existing design from option (c) to option (d) was already a technical-debt item for us). As per the discussion in Bug 531385, dash-node01 and dash-node02 are both hosted at GCP and as such cannot access the NFS space. So my questions are:

1. Is it possible to create an NFS space in GCP such that dash-node01 and dash-node02 can access it?

2. If not, GlassFish might need dedicated build agents that have access to some NFS. (Irrespective of option (c) or option (d), the GlassFish CI needs NFS space.)

Migration of the GlassFish CI is completely blocked on this issue. Please let me know your thoughts.

-- 
Thanks,
Arindam
