Internet Connection Instability II

The University’s internet connection was inoperable this morning from about 7 AM until about 10:30 AM. The problem was isolated to the PIX firewall. A large number of logged outbound connection attempts (roughly 97,000) from forged class A source addresses to a destination in Northern California were discovered. The sheer number of attempts constituted a denial-of-service situation for the firewall.

The true source of the connection attempts was traced to a student computer in Todd/Phibbs. The student’s network port was deactivated, and the firewall was rebooted. This restored internet service.
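One standard countermeasure for forged-source traffic of this kind is a unicast reverse-path check on the firewall itself, so spoofed packets are dropped at the interface before they pile up in the connection table. A minimal sketch in PIX configuration syntax (requires PIX 5.2 or later; the interface names are assumptions):

```
! Drop packets whose source address fails a reverse-path
! lookup on the receiving interface (anti-spoofing)
ip verify reverse-path interface inside
ip verify reverse-path interface outside
```

Ingress filtering of forged internal addresses at the campus edge (per RFC 2827) would also keep such traffic from leaving the network in the first place.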

Internet connection instability I

The University’s Internet connection was unstable from 7 PM to 9:30 PM. We are unsure of the reason. We checked for unusual traffic at the core router and found none. The traffic analysis graph shows an unusually flat usage level at about 5 Mbps during the period from 7 PM to 9 PM; usage then dropped to almost 0 and is now recovering.

The internet service provider has been called.

Camano PM

The disk partitions on Camano were modified to isolate disk space for BJP, provide more swap space, and provide more space for Oracle. The work was completed before the 9 AM end of the PM window.
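For reference, on a Solaris host (an assumption about Camano’s platform) a newly carved swap slice would be activated and verified with the standard swap utility; the device name below is purely illustrative:

```
# Add the new swap slice and list active swap devices
# (slice name is an assumption)
swap -a /dev/dsk/c0t0d0s1
swap -l
```

Adding a matching swap entry to /etc/vfstab makes the change persist across reboots.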

Cascade web (Enterprise Manager) was restarted from the command line, because the oracle account’s .profile file was incorrect.

BJP was started from the command line, but issues with the startup procedure required a few extra steps. BJP was up and running as of 9 AM.

SYSLASER1 modification

The HP LaserJet 9000 (syslaser1) was tweaked to try to determine the source of the multiple print jobs. Items changed on the printer, under Device > System Setup:

Clearable Warnings (changed from Job to On)
Auto Continue (changed from On to Off)

Access to Vashon restricted

In preparation for the Millennium upgrade, access to Vashon’s web server was restricted to the main and McIntyre subnets.

Access to Ketron, which had been restricted to the above subnets, was opened to include all subnets.

Micros Server Adjustments

The Micros server, which controls the cash registers in the CBORD One-card system, ran out of disk space on the C: drive, causing several services to crash. This in turn caused the cash registers to go offline.

The pagefile was split between the C: and D: drives in the following manner: 500 MB on C: and 1 GB on D:. Several temp files and an old log file were also deleted.
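On Windows NT this split is normally made under Control Panel > System > Performance > Virtual Memory; the resulting configuration is stored in the PagingFiles registry value. A sketch of what that value would look like for the sizes above (each entry is path, initial size, and maximum size in MB; the equal initial/maximum figures are an assumption):

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\
    Session Manager\Memory Management
  PagingFiles  (REG_MULTI_SZ):
    C:\pagefile.sys 500 500
    D:\pagefile.sys 1024 1024
```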

The NT backup was moved from 3:30 AM to 2:30 AM, because it was not finishing before opening time for the cafe and diner.

Modified network interfaces on mail server

On Monday, November 3, between noon and 1 PM, the ce interfaces on the University mail server were reset to 100 Mbps full duplex, along with ports 13 and 15 on the mc007#2 switch. The port statistics on the switch were also reset.
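On a Solaris mail server with the ce (Cassini) driver, forcing 100 Mbps full duplex is typically done by disabling autonegotiation and advertising only the 100fdx capability via ndd. A sketch, assuming instance 0 is the interface in question:

```
# Select the ce instance, then advertise only 100 Mbps
# full duplex and turn off autonegotiation
ndd -set /dev/ce instance 0
ndd -set /dev/ce adv_1000fdx_cap 0
ndd -set /dev/ce adv_1000hdx_cap 0
ndd -set /dev/ce adv_100hdx_cap 0
ndd -set /dev/ce adv_10fdx_cap 0
ndd -set /dev/ce adv_10hdx_cap 0
ndd -set /dev/ce adv_100fdx_cap 1
ndd -set /dev/ce adv_autoneg_cap 0
```

The switch ports must be hard-set to the same speed and duplex (for example, with `set port speed` and `set port duplex` on a CatOS switch), since a hard-set host facing an autonegotiating switch port is a classic cause of duplex-mismatch collisions.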

As a result, there have been zero collisions to date on either port of the switch, and “user unknown” errors have decreased.

Vlan Expansion

The Micros Server and cash registers were moved into a separate vlan on October 28, 2003 in an attempt to overcome system problems. The cash registers were having problems operating on-line in the congested Wheelock subnet. After some period of time, the cash registers would lock up, failing to process transactions.

The cash registers’ connection to the Micros Server seems much more stable now that they are in a separate vlan with a much smaller broadcast domain.

Vlan Implementation

On October 17, 2003, the network in Trimble Hall was divided into two vlans. Vlan 116 contains all the Resnet ports while Vlan 100 contains the public ports located in the Forum and Seminar Rooms. The uplink from Trimble to the Core router was converted from a Layer 3 routed port to a Layer 2 Vlan Switched port. This configuration will be the prototype for the rest of the campus network.
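In Cisco IOS terms, converting the uplink from a routed port to a Layer 2 port carrying both vlans would look roughly like the following (the interface name and encapsulation command are assumptions about the hardware in use):

```
! Convert the uplink to an 802.1Q trunk carrying both vlans
interface FastEthernet0/24
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 100,116
```

With the uplink trunked, inter-vlan routing for vlans 100 and 116 is handled centrally at the Core router rather than at the building switch.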

Cascade web interface down

The Cascade web interface was down for several hours on the morning of October 8, 2003, following the regularly scheduled Information Services preventative maintenance period.

This outage was partly the result of miscommunication within Information Services and the improper shutdown of the Oracle Application Server on the affected system, Camano.

The backup server was brought on-line around 11:00 AM and remained up until late in the afternoon. The issues surrounding the failure are being addressed both within Information Services and with Oracle.

The issue with the production instance of the Oracle Application Server was resolved, and the instance was brought back on-line by 5:00 PM.