Pre-Production Server Patch Maintenance 04/09/2017 [Completed]

[Update 04/09/2017 1:05 PM] Our planned maintenance has been completed.

[Original] Pre-Production Server Patch Maintenance is scheduled for Sunday, 04/09/2017, from 8 a.m. to 5 p.m. This should NOT affect any production services such as email, individual and departmental file shares, database applications, or web services (Moodle, vDesk, etc.). If an issue is encountered, please contact the Technology Service Desk at 253.879.8585 or servicedesk@pugetsound.edu.

Technology Services schedules regular maintenance windows, generally on the second Sunday of every month, to ensure a secure, reliable computing environment. Thank you for your patience as we conduct this important work!

[Complete] Pre-Production Maintenance Window 4/15/10, 7:00 AM to 9:00 AM

Update at 8:24 AM: All maintenance work is complete without incident.

On Thursday, April 15th, from 7 AM to 9 AM, all pre-production servers will be patched. This means that all test systems may be intermittently or entirely unavailable during this time. Most users will not be impacted, as most do not have access to pre-production servers. Specifically, the following servers will be impacted:

  • lummi
  • pilchuck
  • mysqldev
  • EXDEV
  • FIDALGO
  • GALAXY
  • KETRON
  • PORTAL
  • VS0

[Complete] Pre-Production Maintenance Window 2/18/10, 7:00 AM to 9:00 AM

Update 2/18/2010 at 9:45 AM: All services are back online. Pilchuck and Lummi took a little longer than expected, but all other servers were up by 9 AM.

On Thursday, February 18th, from 7 AM to 9 AM, all pre-production servers will be patched. This means that all test systems may be intermittently or entirely unavailable during this time. Most users will not be impacted, as most do not have access to pre-production servers. Specifically, the following servers will be impacted:

  • vmhost2
  • lummi
  • pilchuck
  • moodle2
  • kickstart
  • hope
  • decatur
  • shaw
  • squaxin
  • mysqldev
  • EXDEV
  • FIDALGO
  • GALAXY
  • KETRON
  • PORTAL
  • VS0

[Complete] Pre-Production Maintenance Window 12/10/09, 7:00 AM to 9:00 AM

All maintenance work was completed by 8:30 am this morning.  Please contact TechHelp (x8585) if you experience any problems as a result of this change.

On Thursday, December 10th, from 7 AM to 9 AM, all pre-production servers will be patched. This means that all test systems may be intermittently or entirely unavailable during this time. Most users will not be impacted, as most do not have access to pre-production servers. Specifically, the following servers will be impacted:

  • lummi
  • pilchuck
  • moodle2
  • kickstart
  • sage
  • EXDEV
  • FIDALGO
  • GALAXY
  • KETRON
  • PORTAL
  • VS0

OID Test groups and group members recreated today

The Groups container was accidentally deleted in the OID test database today and had to be re-created based on priv data in the Summit database.
Here are the steps we took to recover everything:
1. Recreate the Groups container.
Export the Groups container definition (as LDIF) from the OID production instance and import it into OID test. You'll need an LDAP browser tool such as JXplorer to do this (a scripted alternative is sketched after this list).
2. Recreate all the AD groups.
Set the status of all the pugetsound domain groups in the privilege table to PA and run privcmd.resolve_pending_privdef on each one.
3. Recreate the members in the AD groups.
Set the status of all AD person_privilege records to PA and run privcmd.resolve_pending_privs.
4. Recreate the portal group container.
Export the portal.070109.134036.113589000 container definition (as LDIF) from the OID production instance and import it into OID test, again using an LDAP browser tool such as JXplorer.
5. Recreate all the portal groups.
Set the status of all the portal groups in the privilege table to PA and run privcmd.resolve_pending_privdef on each one.
6. Recreate the members in the portal groups.
Set the status of all portal person_privilege records to PA and run privcmd.resolve_pending_privs.
7. Address any unusual configuration issues.
ViewsFlash groups have a special setup: the administrator group is a member of the creator group, so that membership has to be added manually.
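
For reference, steps 1 through 6 can also be scripted rather than done entirely by hand. The sketch below is illustrative only: it substitutes the standard OpenLDAP ldapsearch/ldapadd command-line tools for JXplorer when copying a container, and uses cx_Oracle for the privilege-table updates. The host names, bind DN, credentials, table and column names, and the parameters passed to privcmd.resolve_pending_privdef and privcmd.resolve_pending_privs are assumptions and would need to be adjusted to match the actual schema.

```python
# Illustrative recovery sketch -- not the exact commands used.
# Assumes the OpenLDAP client tools are on the PATH, cx_Oracle is installed,
# and that the privilege tables/columns are named as shown (an assumption).
import subprocess
import cx_Oracle


def copy_container(base_dn, prod_host, test_host, bind_dn, password):
    """Steps 1 and 4: export a container subtree as LDIF from OID production
    and import it into OID test (scripted alternative to an LDAP browser)."""
    ldif = subprocess.run(
        ["ldapsearch", "-x", "-LLL", "-H", f"ldap://{prod_host}",
         "-D", bind_dn, "-w", password,
         "-b", base_dn, "-s", "sub", "(objectClass=*)"],
        capture_output=True, text=True, check=True).stdout
    subprocess.run(
        ["ldapadd", "-x", "-H", f"ldap://{test_host}",
         "-D", bind_dn, "-w", password],
        input=ldif, text=True, check=True)


def rebuild_groups(conn, domain):
    """Steps 2/3 (and 5/6 for the portal domain): flag the privilege records
    as pending (PA) and re-run the resolver procedures."""
    cur = conn.cursor()
    cur.execute("UPDATE privilege SET status = 'PA' WHERE domain = :d", d=domain)
    cur.execute("SELECT privilege_id FROM privilege WHERE domain = :d", d=domain)
    for (priv_id,) in cur.fetchall():
        # Assumed to take a single privilege identifier.
        cur.callproc("privcmd.resolve_pending_privdef", [priv_id])
    cur.execute("UPDATE person_privilege SET status = 'PA' WHERE domain = :d",
                d=domain)
    cur.callproc("privcmd.resolve_pending_privs")
    conn.commit()


# Hypothetical usage (connection details and DN are placeholders):
# conn = cx_Oracle.connect("priv_user", "password", "oidtest-db")
# copy_container("cn=Groups,dc=example,dc=edu", "oidprod.example.edu",
#                "oidtest.example.edu", "cn=orcladmin", "secret")
# rebuild_groups(conn, "pugetsound")
```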

Successful test of changing CA on production integration server

3/19/2009 11:30am – 12:30pm
Pavan and Jeff disabled the sync profiles and stopped the integration and directory servers on whidbey to install the 1024-bit certificates from RapidSSL. We then started the servers and re-enabled the sync profiles, and verified that a change to displayname in OID synced to AD and that a password change in AD synced to OID.

We then reversed the process to re-install the previously functioning VeriSign certificates and verified that the sync was again working successfully.
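
As a quick sanity check after a certificate swap like this, you can inspect which CA issued the certificate a server is actually presenting. The sketch below is a generic example, not part of the procedure above: the host name is hypothetical, 636 is the standard LDAPS port, and it assumes the cryptography package is installed.

```python
# Print the subject, issuer, and expiry of the certificate a server presents.
# Host name is hypothetical; 636 is the standard LDAPS port.
import ssl
from cryptography import x509


def show_server_cert(host, port=636):
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("subject:", cert.subject.rfc4514_string())
    print("issuer: ", cert.issuer.rfc4514_string())
    print("expires:", cert.not_valid_after)


show_server_cert("whidbey.example.edu")  # hypothetical host name
```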

MX record change

The MX records on our DNS zones were changed to mx00.ups.edu and mx01.ups.edu in an effort to normalize the naming convention for our mail exchange servers. This change resulted in some mail delivery problems, since not all external mail servers picked up the change in a timely manner. A workaround was implemented to allow mail delivery to continue. Mail messages sent between 10:00 AM and 11:45 AM (-8:00 PST) appear to have been affected.
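
For anyone wanting to confirm that the new records have propagated, the MX records for the zone can be queried directly. The sketch below assumes the dnspython package and uses ups.edu as the zone name.

```python
# List the MX records for a zone to confirm the change has propagated.
# Requires the dnspython package (pip install dnspython).
import dns.resolver

for rr in sorted(dns.resolver.resolve("ups.edu", "MX"),
                 key=lambda r: r.preference):
    print(rr.preference, rr.exchange)
```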

Possible WebMail problem identified

In our efforts to identify the cause of the recent problems with the WebMail server, we have been short on information. We have tried to determine what has been causing the delays and unresponsiveness in WebMail of late, looking at possible memory leaks in daemons, possible attacks, and possible misconfigurations. None of these led to a clear answer.

It is believed at this point that, if Ockham's Razor holds true, we may have found the source of the problem. It was discovered late yesterday that the available disk space on the WebMail server was extremely low. Since WebMail serves as an IMAP gateway, temporarily caching and displaying mail messages via an HTTP server, disk space for temporary files is necessary. This is the best explanation for the problems we have seen thus far.

We have increased the available disk space on the server. We have also contacted several individuals who reported problems to determine whether the issue persists.
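
To catch this condition earlier in the future, free space on the WebMail server's temporary-file area could be checked periodically. The sketch below is a generic example using Python's standard library; the path and threshold are assumptions, not the server's actual configuration.

```python
# Warn when free space on the temporary-file volume drops below a threshold.
# The path and threshold are illustrative assumptions.
import shutil


def check_free_space(path="/var/tmp", min_free_gb=2.0):
    free_gb = shutil.disk_usage(path).free / 1024 ** 3
    if free_gb < min_free_gb:
        print(f"WARNING: only {free_gb:.1f} GB free on {path}")
    return free_gb


check_free_space()
```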

Modification to CBord backup

This morning the operators posed questions about the status of the ntbackup process on the C-BORD Odyssey server and the location of its backup log file.

During the course of my investigation, I discovered that the %Odyssey% environment variable does not resolve in all situations, so I modified the tapebackup.cmd file to use the absolute path that %Odyssey% contains rather than the variable itself.
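
The change described above amounts to replacing references to %Odyssey% in the script with the literal path the variable holds. A hypothetical one-off sketch of that substitution is shown below; the file location and fallback path are assumptions.

```python
# One-off substitution: bake the literal Odyssey path into tapebackup.cmd
# so the script no longer depends on %Odyssey% resolving at run time.
# The file location and fallback path are assumptions.
import os
from pathlib import Path

cmd_file = Path(r"C:\Odyssey\tapebackup.cmd")            # assumed location
odyssey_path = os.environ.get("Odyssey", r"C:\Odyssey")  # assumed value

cmd_file.write_text(cmd_file.read_text().replace("%Odyssey%", odyssey_path))
```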

Media Server testing

At 10:30 AM, load testing of the media server was performed. The test used an audio file just over 19 minutes long (19:37) and 69 workstations running the RealMedia client.

The clients were located in Wyatt 203, McIntyre 324, and the basement of the library, with a few clients in the I-Commons. The majority of these machines were connected to the campus network via Cabletron 6000 switches.

The test lasted for the length of the audio file. During that time, no adverse network activity was observed, and the server was relatively unaffected as well. At most, 18% of the CPU was consumed for a brief period. No service-related calls concerning network issues were logged by the Help Desk during the testing. It should be noted that this test was performed during Spring Break.