What It's Like to Get Hit With a DDoS Attack - An Inside View

Ron Meyran

It is not always obvious to a network or system administrator that the company's infrastructure is under attack. The prime suspect in a network slowdown is a technical problem or traffic congestion. An attack usually has a buildup stage, and only as it progresses and persists does someone reach the right conclusion. Below is an attack scenario, described hour by hour by the system administrator of a bank under a DDoS attack.

5:30 a.m.

I am awakened by the sound of an incoming SMS message on my phone.  It reads, "Warning, mainapp server at 30% of maximum load."

The message is an automatic notification from the server health-monitoring tool we recently installed; mainapp is the principal online banking application Web server that handles customer requests. Since our CEO strategically decided to promote online banking and launched a marketing campaign encouraging customers to use it, the bank has invested a great deal of money to ensure that the mainapp banking application web server is robust, scalable, and highly available. So far, it has had enough processing power and memory to handle current traffic; last month's statistics showed a server load of no more than 15%.

Receiving a message indicating that server load is at 30% is worrisome, but not serious. It is possible that the alert threshold parameters were set incorrectly in the monitoring tool, but that can wait until I get to the office later.
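The kind of check such a health-monitoring tool runs is simple in principle. Here is a minimal sketch; the threshold levels mirror the alerts in this story but are assumptions, not the actual tool's configuration:

```python
# Hypothetical warning thresholds, as percent of maximum server load.
# A real monitoring tool would read these from its alert configuration.
WARN_THRESHOLDS = [30, 50, 70]

def alerts_for(load_percent):
    """Return a warning message for every threshold the current load has crossed."""
    return [
        f"Warning, mainapp server at {t}% of maximum load"
        for t in WARN_THRESHOLDS
        if load_percent >= t
    ]

print(alerts_for(35))
```

A misconfigured threshold (say, 3% typed instead of 30%) would fire the same SMS on perfectly normal traffic, which is why checking the alert configuration first is a reasonable instinct.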

6:00 a.m.

Only a half hour later, another SMS message arrives.  This one reads, "Warning, mainapp server at 50% of maximum load."  Something is definitely wrong.

Since I did not configure remote access to the health-monitoring tool, I cannot look at its logs.  While rushing to get to the office to investigate, I run through the possible causes of such high server load. I try to assure myself that it is probably a simple configuration error, but I begin to worry. My phone rings - it is one of my co-workers, another network administrator. She received the same warning notification as I did and wants to know whether I am aware of the situation.

7:00 a.m.

The customer support manager on duty calls me while I am still on my way, reporting that many customers are calling to complain that the online banking website is significantly slower than usual. He says that one of the customers is furious because he was unable to perform a time-sensitive money transfer as quickly as usual, and that he switched to online banking so he could avoid that type of problem. Finally I arrive at the office, and rush to a server terminal screen.  Mainapp's load has reached 70% - nearly maximum.

A quick check of the health-monitoring tool logs shows that the alert thresholds are set correctly.  Online banking traffic still appears abnormally high, so this is not an alert threshold issue.  Thousands of connections have been opened to the server, requesting different pages on the online banking Website.

A few beads of sweat drip down my forehead as I try not to panic.  Such a massive amount of network traffic must be originating from a malicious source, but why?  Who is behind it?  I suddenly remember last week's newspaper headlines, detailing the wave of cyber attacks on financial services. I immediately recall similarities between what our server is experiencing and what I remember reading about in the papers, as I begin to fear that our server is being targeted by a denial-of-service attack.

8:00 a.m.

Assuming the worst, I begin to try and identify the nature and source of the malicious traffic.  First, I check where the connections are originating from and try to isolate the attackers' IP addresses, in order to differentiate the legitimate from the malicious traffic.  Meanwhile, my phone has not stopped ringing.

The CIO calls wanting to know what is going on; I tell him that I am trying to solve the problem but that we might be under a denial-of-service attack that is exhausting our server's resources.  He pauses, and I feel a moment of hopelessness.  He tells me only that the problem needs to be solved quickly, before the CEO gets involved.

I have no clue how to stop the attack, and I am not even sure that it is actually a denial-of-service attack.  I've never seen anything like this in my entire career.  My only knowledge on the subject comes from some reading I did on the Internet after attending last month's security seminar.

Looking at the IP trace, it seems that the malicious connections are coming from many different sources.  Each IP is repeatedly sending HTTP GET requests for various online banking pages, hogging all of mainapp's resources and making the online banking pages slow for legitimate users.
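The triage described here boils down to counting GET requests per source IP and flagging the heavy repeaters. A minimal sketch, using made-up log lines and an assumed per-interval threshold rather than any real tool's output:

```python
from collections import Counter

# Hypothetical access-log lines in "ip method path" form;
# real web server logs carry more fields (timestamp, status, user agent, ...).
log_lines = [
    "203.0.113.7 GET /balance",
    "203.0.113.7 GET /transfer",
    "203.0.113.7 GET /login",
    "198.51.100.22 GET /login",
    "192.0.2.15 POST /transfer",
]

THRESHOLD = 3  # assumed ceiling on GET requests per interval for one client

# Count GET requests per source IP.
gets = Counter(line.split()[0] for line in log_lines if line.split()[1] == "GET")

# IPs at or above the threshold are flood suspects.
suspects = sorted(ip for ip, n in gets.items() if n >= THRESHOLD)
print(suspects)
```

The catch with a distributed attack is that each individual bot can stay under any per-IP threshold while the aggregate load still overwhelms the server, which is why this kind of filtering alone rarely ends the attack.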

With some idea of what is going on, I decide on a short-term plan of action and call an emergency team meeting.

8:30 a.m.

The situation has not gotten any better.  The pace of the attack has been constant, but now mainapp hardly responds to any kind of request.  The customer support manager at my office is upset, as all of his staff is being overwhelmed by support calls.  Customers are unhappy and angry, but what can he instruct his staff to tell them?  I tell him that I think we are under attack by one or more hackers, that we should not expect to regain normal service soon, and that we may release a formal statement shortly regarding our downtime.

9:00 a.m.

The situation has now become catastrophic.  Word has spread, and the entire staff is in a state of panic.  The emergency meeting I called convenes; it consists of the CIO, CTO, network administrators, security manager, application manager, and system administrators (including me).  We are tense, but understand that we have to issue an official message to the customers and decide on a plan of action to deal with the attack. I show everyone the logs, and after a few minutes the security manager notices that some of the malicious requests are coming from Russia. Quickly, I define a rule on the mainapp web server to reject all requests originating from Russia, thinking it may slow down the attack.  Unfortunately, it does not help.  After activating my new filter, I see no decrease in the amount of malicious traffic.  After a brief period with no new connections, additional connections begin to originate from a dozen different countries, including ours!
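In essence, such a rule geolocates each source IP and rejects requests from blocked countries. A minimal sketch, using a made-up prefix-to-country table (real deployments consult a GeoIP database, not a hand-written map):

```python
# Hypothetical prefix-to-country table for illustration only;
# production systems resolve countries via a GeoIP database.
GEO_PREFIXES = {
    "203.0.113.": "RU",
    "198.51.100.": "US",
    "192.0.2.": "DE",
}

BLOCKED_COUNTRIES = {"RU"}

def country_of(ip):
    """Map an IP to a country code using the toy prefix table."""
    for prefix, country in GEO_PREFIXES.items():
        if ip.startswith(prefix):
            return country
    return "UNKNOWN"

def allow(ip):
    """Reject requests whose source IP geolocates to a blocked country."""
    return country_of(ip) not in BLOCKED_COUNTRIES

print(allow("203.0.113.7"))
print(allow("198.51.100.22"))
```

As the scenario shows, this is brittle against a botnet: the attacker simply activates bots in other countries, including the victim's own, and the geographic filter is sidestepped.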

9:30 a.m.

The server is still under heavy load; clearly, blocking IPs by geographic region did not help, so we have to look for another solution. We were not prepared to handle such an attack, and we need to gain a better understanding of how to prevent and mitigate denial-of-service attacks.

10:00 a.m.

The mainapp Web server is completely flooded, and the online banking site is offline.  Upon this news, the CEO decides to get involved.  She emphasizes how bad it is for the bank's reputation to announce such an attack, and wonders how much it will cost the bank in revenue loss and customer dissatisfaction.  She is worried that if the details of this attack leak to the press it could cause panic among the bank's customers.  She reiterates that the attack must be mitigated quickly, by whatever means necessary.

10:15 a.m.

It is now clear to me that we are facing a well-coordinated DDoS attack and that our current security tools cannot mitigate it. I also realize that we have never faced such an attack before and that, although DDoS is a rising security threat, we do not have the right expertise in the organization to deal with something of this magnitude.

The above scenario was originally printed in our "DDoS Survival Handbook," which can be downloaded for free through our online resource, which provides a comprehensive analysis of denial-of-service (DoS) and distributed denial-of-service (DDoS) attack tools, trends and threats.

About the Author

Ron Meyran manages the product marketing and management activities of Radware's security division. Mr. Meyran leads the strategic plan of Radware's IPS solutions for the enterprise, eCommerce and carrier markets.

Prior to joining Radware as Product Director in 2003, Mr. Meyran worked at BrightCom Technologies, where he served as Product Manager for the company's Bluetooth product line based on a fabricated chipset and software. He has also acted as a senior communications and security consultant on projects spanning carrier network planning to secured site design for financial enterprises.
