
Fixing a bit of bad programming used for the first iteration of the feature. #39

Merged
jswager merged 5 commits into jenkinsci:master from kevin-j-smith:master
Dec 30, 2015

Conversation

@kevin-j-smith
Contributor

The original implementation was based on the EC2 cloud plugin, which expects to reuse the provisioned machines and keep them around for a good amount of time. Their implementation also had a bit of smelly code, which I propagated into my initial implementation of provisionable templates. This check-in fixes a good number of those issues.

ghost1874 added 5 commits December 14, 2015 12:48
…h expects to reuse the provisioned machines and those provisioned machines were to stay around for a good amount of time. Also, their implementation had a bit of smelly code which I propagated to my initial implementation of provisionable templates. This check-in fixes a good amount of those issues.
…re were many Hudson.getInstance calls, but that method is deprecated, so I changed them to Jenkins.getInstance. With this "house cleaning" I also fixed many unused imports and missing null checks. 2) The new code for vSphere template slaves was messy and smelly, so I cleaned it up by removing the extra computer class and sub-classing the slave class. Now the code looks much cleaner and more professional, with the added benefit of being easier to maintain. The second major theme was to fix the vSphere template slave so that it works with the workflow job type. First, the slave would not be provisioned (that was fixed in an earlier commit), and second, the slave would not be unprovisioned. I therefore referenced the methods Docker uses to handle slaves; that approach uses Jenkins' Cloud API to a higher degree, as you will see with the RunOnceCloudRetentionStrategy. With that said (and implemented), I can add the list of retention strategies to the template so that we can have longer-lasting provisioned clones.
jswager added a commit that referenced this pull request Dec 30, 2015
Fixing a bit of bad programming used for the first iteration of the feature.
@jswager jswager merged commit eec6021 into jenkinsci:master Dec 30, 2015
@jmellor

jmellor commented Jan 19, 2016

A fix for this merge is urgently requested.

jmellor asked:
{quote}
I see lots of bad things in this version, starting with the new breakage that blindly copies the “Virtual Machine Name” from the “Name” field in the config. Saving the config always overwrites the “Name” field, which is documented as not going to happen. This broke my setups for a long time until I figured out what was wrong. I’m also seeing a fatal error being logged every minute when tracking the nodes. This causes the host machine label to not be displayed or processed correctly when Jenkins is deciding which node to fire up to service a queue entry. Cloning a job also causes the field in the config.xml for the new job to be lost, which leads to some more nasty surprises.

I’m on the latest Jenkins (1.644) and latest plugin (2.9). Reverting to older Jenkins version as far back as 1.624 seems to have no effect. Reverting back to plugin version 2.8 also seems to have no effect on these critical failures.

Right now, I’m editing several hundred config.xml files by hand, as the plugin is messing up all jobs pretty badly, whether or not they reference slaves controlled by the plugin. I see some comments online about problems with the robustness of the Jenkins core API, which may also be contributing to this debacle. JENKINS-32098 also seems closely related, but does not describe all of the observed damage. Should I wait for the fixed version, or back out to an older version - and if so, which one - to get to a sane state again?
{quote}
and jswager replied:
{quote}
I think this is the price we paid for accepting contributions into the project. I'm not sure which PR caused the failure, but I suspect this one: #39

Recommendation is to contact the author of that PR. I'm not actively testing PRs as they come through.
{quote}

@kevin-j-smith
Contributor Author

Hello, can you please explain your steps further? Thanks.

@jmellor

jmellor commented Jan 21, 2016

Ghost1874 asked:

Hello, can you please explain your steps further? Thanks.

I have a very mixed set of slave machines. All run various flavours of Ubuntu; some are always up, some are shut down and reverted to a snapshot after every build, some build in Docker containers, and some run jobs in pbuilder containers. The problems all seem to stem from the common scenario where any of the slave machines managed by the vSphere plugin are offline, to be started up when selected - my preferred setup at this point, because it reduces the load on the ESXi hosts until they are required. This used to work quite well, but now causes massive issues throughout Jenkins.

For instance, if any slave is offline, there is a once-per-minute traceback in the Jenkins logs, and the label-matching code in Jenkins scheduling breaks. The most obvious symptom is that you can no longer view the configuration section where you select the labels that restrict which machine runs the job. This happens for all jobs, not just the ones that would select a host controlled by the vSphere plugin.

Another side effect of this same error is that if I copy a job, the copied job no longer has the label restriction, and will run on any available slave machine instead of a suitably configured one, which almost certainly breaks the build.

The code that brings the slave online is also breaking, and jobs just sit in the queue. It's a massive mess. How do I dig my way out of these new issues?

I am using a somewhat undesirable workaround at this point - keeping all slaves online all the time. This considerably increases the memory load on the already-overloaded production ESXi host machines.


@kevin-j-smith
Contributor Author

Hello jmellor,

I have fixed the code that was broken when the vsphere-cloud-plugin was used with static slaves, i.e. the slaves someone defines in Jenkins via "Add Node". As you pointed out, I was not saving the vmName correctly, so any job that was submitted and queued would not run because the plugin could not start the VM image. This should no longer cause you issues. I was able to test this in my VM complex, and your scenario seems to be working fine.

As for the issue where the label is missing when you copy a job: I am not seeing this with the Jenkins version I am using. Since I am in a development environment just for the vsphere-cloud-plugin, I am using Jenkins 1.609.2.

The changes/enhancements I contributed to this plugin add the capability to provision new machines on the fly. When a job with a particular label is queued and no machine for that label is available, the plugin asks vSphere to clone a new machine for that job. Currently, this new slave is under a run-once retention strategy, so when my jobs are not running the VM complex is completely cleaned up and my ESXi machines carry no extra load. If you review the wiki, I have added information about how to set this up. This is very helpful to me because I run a few thousand tests against a build in a downstream job, which previously could only run on static slaves - slaves that, as you know, would have to be defined in both vSphere and Jenkins beforehand. With these new enhancements (I use the workflow plugin to define a loop that creates nodes and defines the procedures to run on each node), I can parcel out the test work to a big set of VMs that are provisioned on the fly by Jenkins and the vsphere-cloud-plugin. Once the tests finish, all of the machines are returned and vSphere is again "cleaned up" of test machines. Not only do my tests run faster overall, but my vSphere resource pool is freed up.
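The run-once lifecycle described above can be sketched as a tiny stand-alone model. The class and method names here are illustrative only, not the plugin's actual RunOnceCloudRetentionStrategy code: an agent accepts exactly one build, then is deprovisioned and never reused, which is what frees the vSphere resource pool between runs.

```java
// Hypothetical, simplified model of a run-once retention policy.
class EphemeralAgent {
    private boolean busy = false;
    private boolean terminated = false;

    // The agent accepts at most one build in its lifetime.
    boolean accept() {
        if (busy || terminated) return false;
        busy = true;
        return true;
    }

    // After the single run, the agent is torn down; it is never
    // returned to the pool, so the cloned VM can be deleted.
    void complete() {
        busy = false;
        terminated = true;
    }

    boolean isTerminated() { return terminated; }
}
```

A second call to accept() after complete() returns false, which is the whole point of the run-once strategy: every queued job gets a fresh clone.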

Regards,
Kevin J. Smith

@jmellor

jmellor commented Jan 26, 2016

Thanks! Where do I get the updated plugin? It does not seem to be on the
plugin site. Does it need to be merged by Jason Swager first?


@kevin-j-smith
Contributor Author

I have messaged Jason to get this merged. He normally replies in a few days. If I see a message about it being merged, then I will let you know.

@jmellor

jmellor commented Jan 26, 2016

Kevin said:

When a job with a particular label is queued and a machine for that label is not available, the plugin asks vSphere to clone a new machine for that job.

This part of your message is intriguing, and may be something that I can use instead of static slaves. How do you configure a slave to fire up from a template, based on a label, when no machine with a matching label is available? How does it determine which template to use? Can you walk me through such a config?


@jswager
Member

jswager commented Jan 26, 2016

Just released the new version. It should be available in a bit via the
Jenkins website.


@kevin-j-smith
Contributor Author

Hello jmellor,

I wrote up how to set up Jenkins to provision slaves on demand on the vsphere-cloud-plugin wiki; the link is below. Basically, in Manage Jenkins -> Configure System, in the cloud section where you have already defined a vSphere cloud, you should now see a Slave Templates section with an Add button next to it. Add a new template and fill in the information as you would when defining a new clone during a build phase, plus some of the information you would give a static vSphere slave. Then, by using the template's label in a job, Jenkins will provision machines based on that template. Currently the retention strategy is run-once, so the provisioned clone is deprovisioned automatically after the run.
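As a rough illustration of the label-to-template matching described above (all class and field names here are made up for the sketch; this is not the plugin's real API): when a queued job's label has no live agent, the cloud picks the template whose label matches and clones from it.

```java
import java.util.List;
import java.util.Optional;

// A slave template pairs a Jenkins label with clone settings.
class SlaveTemplate {
    final String label;
    final String clonePrefix;
    SlaveTemplate(String label, String clonePrefix) {
        this.label = label;
        this.clonePrefix = clonePrefix;
    }
}

class TemplateCloud {
    private final List<SlaveTemplate> templates;
    private int counter = 0;

    TemplateCloud(List<SlaveTemplate> templates) {
        this.templates = templates;
    }

    // Pick the first template whose label matches the queued job's
    // label and derive a unique clone name from its prefix. In the
    // real plugin this is where vSphere would be asked to clone a VM.
    Optional<String> provision(String jobLabel) {
        return templates.stream()
                .filter(t -> t.label.equals(jobLabel))
                .findFirst()
                .map(t -> t.clonePrefix + "-" + (++counter));
    }
}
```

A job whose label matches no template simply stays in the queue, which mirrors the behaviour when no cloud can serve a label.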

Thoughtfully,
Kevin J. Smith

https://wiki.jenkins-ci.org/display/JENKINS/vSphere+Cloud+Plugin

@kimsDK

kimsDK commented Feb 23, 2016

Hi Kevin
I tried to set up an on-demand (Windows) slave, but I get an exception after SSH is connected. Could you walk me through how to use Windows-based on-demand slaves?
BR Kim S.

@kevin-j-smith
Contributor Author

Hello Kim, I have yet to start working with Windows slaves. I was just informed by my system operations team that a Windows master has been created. Hopefully soon, within the next month or two, I will have this working on Windows as well. At that time I will reply with my exact steps.

Thanks for the interest,
Kevin J. Smith

@kimsDK

kimsDK commented Feb 24, 2016

Thanks for your reply, Kevin. Good to hear it's on its way ☺

For now I will try to make a workaround.

My requirement is to spawn sets of VMs, one Windows-based and one Linux-based (in a client/server setup).

My Win 7 VM template has one NIC with DHCP, and when cloned it'll have a "run once script" that does the following:

  •      Use PowerCLI (from the guest OS) to find the VM name on the ESXi host by comparing IP addresses.

  •      Rename the guest hostname to reflect the VM name.

  •      Do something to update DNS, like maybe renewing DHCP (I hope this will update the DNS).

  •      Find the number of running sets of cloned VMs (from the VM name prefix).

  •      Add a network adapter via PowerCLI connected to "virtual lan(X)" (from the guest OS; 'X' is the number of running sets + 1).

  •      Wait for the adapter to show up in the guest.

  •      Configure the new adapter with a static IP (client/server connection over the virtual LAN).

  •      Bring the slave online running JNLP with the new hostname (I expect this will work).

  •      Let Jenkins configure and orchestrate the guest OS.

I don't know how the vsphere-cloud plugin will react to rebooting before it is connected.
Maybe some of my scenario can be used when considering how to implement Windows guest support ☺

Great work by the way ☺
BR Kim


@elordahl
Contributor

This could be pretty straightforward if we use a WinRM library like
https://github.com/xebialabs/overthere/ to configure the slave remotely
(similar to SSH for *nix). Many tools, like Chef and Ansible, rely on
WinRM for their Windows manipulation/management. I think the following
steps should be sufficient, granted we're depending on WinRM:

  1. Deploy the VM from a template (this is already implemented).
  2. Create a new slave configured specifically for JNLPLauncher, as opposed to the usual Launcher.
  3. Obtain the "secret" for the JNLP slave via SlaveComputer::getJnlpMac().
  4. Use the WinRM library to copy slave.jar to the new VM.
  5. Use the WinRM library to invoke the JNLP command to start the slave with its name and secret. Voila! The slave is online and the build runs.
  6. When complete, delete the slave from the master and destroy the VM as usual.

Hopefully this won't be easier said than done, but I'll see if I can get a
PR submitted in the next few days. Ephemeral Windows nodes will be
especially useful to have!
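Steps 4 and 5 above could be sketched roughly like this. The WinRmClient interface is a placeholder for whatever abstraction the WinRM library ends up providing, and everything here except the mention of SlaveComputer::getJnlpMac() is hypothetical, not the plugin's actual code:

```java
// Hypothetical WinRM abstraction; a real implementation could wrap
// a library such as overthere.
interface WinRmClient {
    void copyFile(String localPath, String remotePath);
    void run(String command);
}

class WindowsSlaveBootstrap {
    // Push slave.jar to the VM, then launch it with the node's JNLP
    // URL and secret so it connects back to the Jenkins master.
    static void connect(WinRmClient winrm, String jenkinsUrl,
                        String nodeName, String jnlpSecret) {
        winrm.copyFile("slave.jar", "C:\\jenkins\\slave.jar");
        winrm.run("java -jar C:\\jenkins\\slave.jar"
                + " -jnlpUrl " + jenkinsUrl + "/computer/" + nodeName
                + "/slave-agent.jnlp"
                + " -secret " + jnlpSecret);
    }
}
```

The secret passed in would come from SlaveComputer::getJnlpMac() on the master side (step 3), so the remote process authenticates as that specific node.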

On another note, if you're interested, the workaround I've been using for
3-4 years now (sadly) involves the vSphere build steps. While it is
sufficient, I'd like to see this process work more natively going forward.

The workaround is to configure build steps that deploy and power on a
VM, run shell commands (net use, xcopy, schtasks) to copy the slave jar to
the new VM and schedule a task to start the jar remotely, and then finally
trigger ANOTHER build on the newly started slave (which uses the swarm
plugin). Once the job is completed, the delete-VM build step is invoked,
completing the cycle. It does the job, but you're left with jobs within
jobs, which can be a bit overwhelming.

Thanks,
Eric


@kimsDK

kimsDK commented Feb 24, 2016

Hi Eric
What you explain is exactly what I kind of hoped for.
I prepare an ISO image with Microsoft DISM, copying in scripts that enable WinRM and such from a PowerShell script.
I am using Packer to create my template from the ISO (using WinRM).
I am using Ansible for orchestration (using WinRM).
It would kind of complete the circle if the vsphere-cloud-plugin could use WinRM too ☺
It would also be great if the "build" could survive a slave reboot, reconnect via WinRM when it comes back up, and then finally connect via JNLP.
When using cloned VMs the guest hostname must be changed, and Windows services are not all happy about a hostname change without a reboot.
Or maybe there is another solution?
BR Kim
