Dealing with Pacific Bell


Path: sun.sirius.com!usenet
From: Don Hurter
Newsgroups: sirius.tech
Subject: Dealing with Pacific Bell
Date: 27 Jun 1995 02:56:27 GMT
Organization: Sirius Connections
Lines: 342

The following was written to try to explain our situations with Pacific Bell, who carries our main data circuits at the moment. I speak here of my own opinions and experiences, not for Sirius Connections as an organization, yet most of this also pertains to what our company has had to endure. It does not speak very favorably of PacBell, but at this point, with one of our primary lines down for the nth time and the better part of a day, I'm in no mood to sugar-coat the problems. I don't wish to indict any particular PacBell employee, as they come both good and bad, but instead will focus my views on Pacific Bell the company, whose disorganization and questionable management led to the situations described. It is also getting late after a hard day of troubleshooting and anxiously waiting for our circuits to be repaired, so my editing might be lacking in parts. I feel after all our customers have been through, I owe them this explanation.

27-Jun-95 00:15 D.H.

We have been plagued by disconnections from Pacific Bell, impacting both our dial-up phone lines and our T-1 data lines. We have known for months that we needed redundant connections all around, and have worked with PacBell, our upstream providers, and other network vendors to establish them. However, in the process of planning such work, and in maintaining our day to day operations, we have still had our service disrupted in one manner or another by PacBell, almost every other week.

Sirius has had fibre-optic cables installed in the building, along with a refrigerator-sized multiplexor installed below the telco closet. Most of this has been in place for a month and a half, but it takes considerable work and time to order circuits delivered over fibre (to date our T-1's have been delivered over copper wires, which run within the same trunks as the dial-up phone lines). We have a delivery date of mid-July to add new redundant connections, so we have been hoping that everything would stay working until that time. Pacific Bell, unfortunately, simply won't leave things alone.

Over the past 4 months we have had our phone lines disconnected, sometimes for weeks, re-routed to other buildings, or else had the modem-hunting broken. Our data lines have also been disconnected, tapped into by phone installers, mislabeled, mis-routed, and intermittently dropped, causing extremely difficult troubleshooting efforts. We have ordered a secondary minimum point of entry, which is our own private phone service wired independently from the rest of the building, only to have the equipment installed but not connected to the PacBell central office. Every third time we order new lines, the installers disable some existing service. When the repair people show up, they often have incorrect records, and we end up carefully explaining to them the history of our wiring problems, the cable routing from our building to the CO, and which lines not to touch when they bulldoze their way around the telco closet. They also have tried to place the blame on us and our in-house wiring, when in fact our equipment is better installed and labeled than most of the work they do. We estimate that we lose at least one solid man-week per month correcting PacBell screwups.

Lest one think I'm simply making a one-sided diatribe against Pacific Bell, I'll give them some credit as well. First and foremost, our main sales rep has done an incredible job working with the situation handed to him. He has consistently worked late into the evening helping us resolve problems, has done very methodical detective work finding us copper lines for new orders when the records show no pairs available in the main cables, and helped us sort out a very complex changeover when we moved all of our phone lines away from an interface box to a direct underground cable. PacBell also employs a few installers/repair people who have gone the extra mile to correct problems outside of their control. They have worked double shifts at a time when PacBell is reducing their work force. Then there is the cable splicer/fibre installer who would call us to see if everything was working properly, even when a particular job wasn't supposed to have anything to do with our lines. And finally PacBell has pulled fibre to our building at their cost to provide us with room to grow, even though the circuits we are running over it hardly pay for their investment. By and large, most of the PacBell personnel we have dealt with have been courteous and at least apologetic when they realize PacBell's mistakes.

But Pacific Bell has repeatedly screwed up our service, and done so with such regularity that we will no longer politely stand by while they try to fix things. We have been maintaining a database of service outages, and record in each entry the source of the trouble, when it gets fixed, and what followup is necessary to prevent future problems. Except for two incidents of Sirius equipment failure, all of the outages our customers have suffered were caused by some sort of PacBell error. And despite any measures we may take to protect against repeated failures, they still manage to find new ways to disrupt our service. The vandalism of the interface box, for example, could have been wholly avoided if they had simply locked the box up on Saturday, when I happened to walk by, noticed the access doors open, and called it in to their priority repair line.

The most distressing thing for us is the effect these outages have on our day-to-day operations. First, they disrupt you, the customers, from doing what you set out to do on the Internet. Our reputation is at stake, and each time our circuits are taken down it appears as though we cried wolf in the past. We get tired of dragging out the same old excuse, 'PacBell screwed up our lines again', and you get tired of hearing it. But the truth is that PacBell runs a sloppy operation, and we suffer as a result.

Second, the outages essentially ruin our workdays when they occur, since we have to focus all of our attention and efforts on rectifying the problem. We spend a lot of time running from the interface box to the telco closet up to our office checking the routers, answering angry customers who are wondering what happened to the connection, and spending every other available minute on the phone with PacBell troubleshooting the problem or getting a status update. We never know if the outage will be for five minutes or five hours, so when customers call we can't tell the complete story. Sometimes it may take us one or two hours just to pinpoint the problem, since it usually takes that long for PacBell to respond.

Finally, these disruptions set back our future efforts, since we have to spend double the time and resources to correct immediate problems, rather than proceed in a calm and methodical manner. It sometimes feels like Zeno's paradox, where we approach the finish line but cannot cross it. Three weeks from now we'll have redundant connections and more robust routers online, but every day leading up to that point throws another obstacle in our path.

People ask if we can avoid connecting through Pacific Bell, and instead pass our data through another access provider. There are a limited number of reliable data networks available that connect one site to another, and we have talked with them all. We currently connect to The Little Garden via Metropolitan Fiber Systems, a nationwide network carrier. However, the final wires that enter our building (known as the local loop) pass over PacBell facilities, and PacBell finishes the connection for MFS. No other carrier reaches into each and every building like PacBell; if you want circuits, one way or another you'll wind up using them. The only exception is for ISPs to co-locate their operations in an existing fiber building and get their circuits directly from the alternative carriers. Sirius is implementing such a plan for future expansion, and has even signed leases with other carriers, but that will take a few months to get operational. We won't divulge who we're co-locating with, but hope to make an announcement around September.

Standard analog dial-tone is another area where one might look to another source for better service. But like the paraphrased laws of thermodynamics - you can't win, you can't break even, and you can't get out of the game. Pacific Bell owns almost all of the statewide telephone facilities, and there is no way to avoid using their pervasive network to establish modem connections. This was supposed to all change at the beginning of this year, and again Sirius did some hard investigation of alternatives. The reality for dial-tone access is even more lopsided than data services, since any alternative access provider has to essentially 'rent' phone circuits from PacBell if they wish to connect up with _any_ PB customer. (Raise your hand if you get your phone service from PacBell... Q.E.D.)

In the meantime, we are doing everything we can to prevent any further disruptions, but again, Pacific Bell makes our work extremely difficult when they indiscriminately take down our lines. The lost hours, money, sweat, and customer goodwill are something we have a hard time writing into our budgets; how many hours of downtime shall we plan for this month? There is a light at the end of the tunnel, only we're not sure if it's the other end, or the headlight of PacBell's oncoming Internet services. We would love nothing better than to promise our customers uninterrupted Internet connectivity, and focus our efforts on improving services rather than repairing problems. But lately it's been a hell of a struggle for all concerned, and we apologize for dragging you into this.

Some anecdotes of service disruptions we have experienced:


When our frame-relay connection was about to be installed at our building, some technicians arrived at 5 P.M. after a full day of work (the installers start work at 6 - 7 A.M.). We didn't want them to burn themselves out trying to get the circuit up that evening, so we told them to return the following morning and get a fresh start. One tech left to finish up another small job, and the other went to our interface box (also known as the 'B' box) on 4th and Townsend, where all our wires passed through at the time. At around 7 P.M. our main T-1 to TLG went down, setting off trouble lights on some of our equipment. We reported the problem to PacBell, who four hours later determined that some wires had been removed at the B box. The technician who left to prepare the intermediate cross-connects had removed our primary T-1 wires, thinking she could use the terminals for the frame relay circuit. The problem wasn't completely corrected until the following morning.


Earlier this year our building started running out of copper phone lines that ran to the Minimum Point Of Entry (MPOE, also known as the primary customer demarcation point in the telco closet). We asked PacBell what we should do to boost the capacity, and after much discussion and analysis PB told us that the building itself still had plenty of copper pairs available (a theoretical 600 lines), but the bottleneck was at the B box on 4th Street, which was at maximum capacity for the entire neighborhood. We knew we could get 24 analog lines carried over a single T-1 line, which only requires 4 copper wires, so that would give us many lines while using up few wires. We needed to buy a digital channelbank to split out the lines, which alone cost $3-5K. Facing a shortage of copper we went ahead and ordered the lines and the channelbank, and waited the three weeks it took for the T-1 to show up. When it was ready to be installed, the same technician who previously took down our main T-1 did it again! She disconnected the exact same lines for the second time in the B box. That problem alone took 6 hours to resolve.
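For the curious, the copper arithmetic works out roughly as follows. This is my own back-of-the-envelope sketch using standard T-1 figures, not anything PacBell supplied:

```python
# Standard T-1 figures (not PacBell's numbers): 24 DS0 voice channels
# of 64 kbps each, delivered over two copper pairs (4 wires).
DS0_KBPS = 64            # one digitized voice channel
CHANNELS_PER_T1 = 24
T1_KBPS = 1544           # total T-1 line rate

payload = CHANNELS_PER_T1 * DS0_KBPS   # voice payload in kbps
framing = T1_KBPS - payload            # framing overhead in kbps

wires_per_t1 = 4          # two pairs: one transmit, one receive
wires_per_analog_line = 2 # a plain analog line is a single pair

# Copper saved versus running 24 individual analog lines:
saved = CHANNELS_PER_T1 * wires_per_analog_line - wires_per_t1

print(payload, framing, saved)  # payload/framing in kbps, saved in wires
```

So one T-1 replaces 48 copper wires with 4, which is exactly why it looked so attractive against a maxed-out B box.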


When the 24-channel T-1 did show up, it only carried 16 out of the 24 lines ordered. It was three weeks before the remaining lines showed up. Before they did, one of the phone lines caused a tremendous relay stutter in the channel bank when it was in use. We swapped channel cards, studied the cryptic technical manuals, and still couldn't get the line to work. We finally called PacBell to see if they knew what the problem was, and they sent out a repair tech who would charge $35 per fifteen-minute interval if it turned out to be a customer equipment problem. He spent three hours talking to someone back in the central office, having them swap equipment at their end. They finally confessed that there was a bad card in their frames, and swapped in a new one.

Three weeks later when the remaining eight lines showed up, one of them again exhibited the same symptom of the relay stutter. Once again we swapped equipment around, and finally called PB in to have a look. After another couple of hours of testing, lunch breaks, and more testing they discovered that whoever removed the defective card the first time simply swapped it with one that was in the position of the remaining eight lines, which hadn't been installed at the time. This time they put in a new card (or maybe swapped it with some other customer's unit.)

We eventually ended up learning that 28.8k modems don't work reliably when attached to channelbanks, because of the multiple A/D/D/A/D/A conversions, and sold the unit. We never recovered the $2000 install charge for the 24 channel T-1, or the other monthly costs associated with experimenting with it.
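The intuition behind the 28.8k failure, as I understand it, is that each extra analog/digital conversion adds quantization noise, and a 28.8k modem already runs close to the theoretical (Shannon) limit of a voice channel. A rough illustration with assumed SNR figures of my own, not a modem spec:

```python
import math

# Rough Shannon-capacity intuition for why extra A/D conversions hurt
# 28.8k modems. The SNR figures below are illustrative assumptions.
BANDWIDTH_HZ = 3100  # nominal voice-channel passband

def capacity_bps(snr_db):
    """Shannon limit (bps) for a channel at the given signal-to-noise ratio."""
    return BANDWIDTH_HZ * math.log2(1 + 10 ** (snr_db / 10))

clean = capacity_bps(38)     # ~38 dB: roughly one codec pass on a good line
degraded = capacity_bps(25)  # assumed SNR after several extra conversions

print(clean > 28800, degraded < 28800)
```

With one clean conversion there is headroom above 28.8 kbps; pile on a few more conversion stages and the ceiling drops below it, which matches the flaky behavior we saw through the channelbank.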


The larger solution to the phone line shortage at the B box was to have Pacific Bell run a dedicated underground trunk from the Folsom Street central office directly to our building. They would splice all our lines over from the old wires running from the B box to the new wires, which would show up right outside our building. This process took months to plan, and we were on the phone with our sales rep sometimes twice every day just to go over the logistics. We knew that the process of cutting over modem lines while they were in use would be a tricky business, and ordered 96 new lines over the new cable so that they would show up first and give us some surplus capacity while we took our existing lines out of service. In the meantime we had to put all new customer registrations on hold, or else oversell the modems and give everyone busy signals. The delivery date for the new cable slipped, and we sweated each day until the new lines showed up, losing some existing and potential customers because of the limited modem pool. We also had to restructure our dial-up hunt groups to accommodate dedicated service clients who we had put on hold for months, and incurred hundreds of dollars of service charges each time we touched the hunting sequence.

When the new lines did arrive, a four week circus set up shop outside our building at the external splice box in the street. There were all manner of PacBell trucks, cable rigs, service tents, and other odd equipment parked outside each day while the splicing job proceeded. Many of the circuit records describing our building's phone lines were incorrect, leaving the splicers with a live Russian-Roulette game of Guess-Who-This-Wire-Belongs-To. During that period we lost two to three lines per day, and had to jump around them with emergency line forwarding. The main splicer, who did a heroic job of working with the out-of-date records, knew each and every one of our lines intimately by the time he was done, and often reported to us trouble areas before we could discover them ourselves. Part of the task was confounded by intermittent broken wires, which worked one day and not the next. We tried to keep track of which lines got lost, but couldn't keep up with the ever-changing situation. At one point our emergency trouble line was swapped with someone's fax line in a building down the street, and it rang for a day and a half with expectant faxes.

The splice job affected our service for months afterwards, and for technical reasons our T-1s were never transferred from the interface box onto the direct underground cable. This fact alone would become the root cause for another half-dozen network outages, while we waited for the fiber to be delivered to our building.


After we cleaned up the lines and hunt groups from the splice job, we ordered a new round of lines to fill in gaps that developed when we ran out of copper. Prior to the new cable installation, we had ordered groups of lines, five at a time, in hopes that PacBell could at least tweak the existing cable to try to fill these small orders. Our sales rep got a number of these orders fulfilled through persistent prodding of the CO facilities managers and record keepers, who knew which pairs were marked as unusable for whatever reason. To their credit, the installers squeezed out most of the orders until they physically ran out of usable pairs, which helped us through the following month until the new trunks arrived. However, the five-line orders arrived out of sequence, so our hunt groups had confusing gaps from one set of numbers to the next. When the last five lines did arrive, a month after all the splicing, the installer disconnected one of our T-1s thinking he could use those binding posts to attach the new wires. On top of that, one of the five lines was dead, with no dial-tone at the demarcation point.

We discovered this problem a few weeks later when we went to put those lines in service (they were lying dormant while we attended to other matters). We called in the trouble, and the following day a PacBell tech showed up to try to revive the line. At the end of the day he told us the line was up again, and we took his word for it. The next morning when I went to connect our phone wires to the terminals I found no dial tone. I called in the trouble again, and at first PB denied that there was a problem. They later sent someone out and we explained the problem again and left them to fix it. At the end of the day I went down to the phone room to find it locked with no sign of progress. The next day I called again, and a different tech showed up. I went over the problem once again, and left him alone. Again he left without correcting the problem. I thought that maybe this problem was becoming an elimination competition among PacBell repairmen, and no one had yet shown the strength to defeat this mighty wiring problem. Two more repair trips later they finally got the problem fixed.


One evening while we were working on the servers, one of our employees called in from home to tell us he got a busy signal while dialing in. I found that hard to believe, since we had a comfortable over-capacity of modems following all the splicing work. The modems themselves showed that people were still online, so we couldn't figure out where the problem lay. We called the main 14.4k dialup number, and the numbers immediately following it, and encountered busy signals at each line. The hunting normally forwarded each call to the next available modem in the pool that wasn't in use, but that evening no calls seemed to be forwarding. We thought at first that maybe there was some sort of catastrophic cable failure in the neighborhood, but the PacBell priority service line saw no such trouble. They later discovered that someone had received an order to rearrange our multi-hundred line hunt group, which caused all busy call-forwarding to completely break down. The shift supervisor tried to calm down the sobbing programmer, who was already working well past her normal hours and was trying to finish the job and get home. They had to then assign someone else to completely reconstruct our hunting sequences from scratch, after we scrambled to fax them a thorough description of which line forwarded to what. We gave up at around 1 A.M. from fatigue, while customers were essentially locked out from calling in all evening.


Two Fridays ago our frame-relay connection dropped (which was running over copper), and we reported the problem to the priority repair service. They sent out a technician, who fiddled with the wires for a few hours before disappearing at midnight without fixing the circuit. On Saturday someone different came by, couldn't solve the problem, and left. We started fresh on Monday morning, and a new PacBell supervisor came by, claiming he 'wouldn't leave until the problem was solved.' After working with test equipment at the demarcation point in the phone closet, he claimed our in-house wiring was bad. I explained to him that we had swapped wires, equipment, jacks, and everything else under our control, yet he still insisted on wasting half a day trying to place the blame on our wires. At 6 P.M. he packed up and left, accomplishing nothing. Our frame-relay customers, in the meantime, were still left without a working connection. Tuesday morning PacBell decided they couldn't get the copper connection to work again (evidently, some other PB technician tapped into the wires on Friday, causing the whole mess, but for some reason they couldn't rectify the problem as of Tuesday). We then had the circuit redesigned to run over the new fiber-optic cable, a risky move since the equipment hadn't been fully tested, but it was the only option available at that point.

We had been planning for months to switch that circuit over to the fiber, in a calm manner while we coordinated with our frame-relay customers. Instead we had to scramble to solve their problems in a hurried manner, which affected our customers and caused us many lost days of work.


Today (Jun 26) we discovered that another T-1 had gone down, this time the result of the infamous PacBell interface box getting vandalized, and our wires pulled loose. As of this writing PacBell has had two shifts of repair technicians work on the problem, and we have escalated the problem to the highest levels both within PacBell and within MFS, which runs the circuit. They have managed to take our frame-relay circuit down while they work on the multiplexer, and as of midnight still have not resolved the problem. The interface box is damaged beyond repair, which could have been entirely avoided if they had simply locked it back up on Saturday after I reported it open. Once again we will be transferring a circuit over to the fiber, which we had hoped to do in a more graceful manner when another connection gets added to take the load, but this has become an emergency situation with no alternatives available.


There are far more incidents of minor problems (by comparison) which we have logged into a trouble database. We have grown wary of anything PacBell does at this point, and will continue to pursue alternate network arrangements from other vendors. If all goes well we will have at least one redundant connection in the middle of July, which will prevent 95% of the types of outages we have experienced. If we had known that our service was going to be this problematic we would have done so much earlier, but we mistakenly placed a certain amount of trust in PacBell's promises that none of this would ever happen again. We've learned a tough lesson through all this, and have perhaps over-engineered our future plans as a result, but we hope you now appreciate some of the problems, mostly beyond our control, that we have had to deal with.

Thank you for your continued understanding, and good night.

-- Don
