Thursday, October 31, 2013
NSA MUSCULAR: What else do we know?
Monday, October 28, 2013
CIA/NSA Special Collection Service
28 October 2013
CIA/NSA Special Collection Service
The Special Collection Service is a joint CIA-NSA surreptitious entry agency which breaks into targeted facilities to steal secret information.
http://cryptome.org/2013/10/nsa-cia-berlin-spy-nest.pdf
Saturday, October 26, 2013
NSA Close Access Sigads for European Targets
A sends:
Date: Fri, 25 Oct 2013 01:21:29 -0700 (PDT)
From: xxxxx[at]efn.org
To: cryptome[at]earthlink.net
Subject: Close Access Sigads; The logic puzzle from hell
"One document lists 38 embassies and missions, describing them as "targets" '. This is the one:
http://www.theguardian.com/world/2013/jun/30/nsa-leaks-us-bugging-european-allies
Based on multiple stories, I have pieced together a significant portion of the document. I have enclosed the table, and supporting document showing my logic. I have probably gotten the lines out of order, but everything I have is accounted for in the supporting articles.
Brazil and France I have copied directly, I have previously sent the screen capture from Bom Dia Brazil, but re-enclose it anyhow. There are two documents that can be used to tell the same story in each new country. This is one, the other is BOUNDLESSINFORMANT. It makes them valuable, so only pieces get shown at a time. Greenwald and Co have been milking them for all they're worth.
Supporting document: http://cryptome.org/2013/10/nsa-close-access-sigads-eu.pdf
Table:
Screen capture:
Tuesday, October 22, 2013
Fresh NSA Leak on Mexico President Spying
Thursday, October 17, 2013
Candidate for Next NSA Head
According to Reuters:
http://www.reuters.com/article/2013/10/16/us-usa-nsa-transition-idUSBRE99F12W20131016
US Fleet Cyber Command / US 10th Fleet:
http://cryptome.org/2013/10/fcc-10th.pdf (1.9MB)
http://www.navy.mil/navydata/bios/navybio.asp?bioID=434
Vice Admiral Michael S. Rogers
Commander, U.S. Fleet Cyber Command
Commander, U.S. 10th Fleet
MONTEREY, Calif. (Jan. 31, 2012) Vice Adm. Michael S. Rogers, commander of U.S. Fleet Cyber Command and U.S. 10th Fleet, speaks to students and staff at the Center for Information Dominance, Unit Monterey, during an all-hands call. (U.S. Navy photo by Mass Communication Specialist 1st Class Nathan L. Guimont/Released) (Source)
Vice Adm. Rogers is a native of Chicago and attended Auburn University, graduating in 1981 and receiving his commission via the Naval Reserve Officers Training Corps. Originally a surface warfare officer (SWO), he was selected for re-designation to cryptology (now Information Warfare) in 1986.
He assumed his present duties as commander, U.S. Fleet Cyber Command/commander, U.S. 10th Fleet in September 2011. Since becoming a flag officer in 2007, Rogers has also been the director for Intelligence for both the Joint Chiefs of Staff and U.S. Pacific Command.
Duties afloat have included service at the unit level as a SWO aboard USS Caron (DD 970); at the strike group level as the senior cryptologist on the staff of Commander, Carrier Group Two/John F. Kennedy Carrier Strike Group; and, at the numbered fleet level on the staff of Commander, U.S. 6th Fleet embarked in USS Lasalle (AGF 3) as the fleet information operations (IO) officer and fleet cryptologist. He has also led cryptologic direct support missions aboard U.S. submarines and surface units in the Arabian Gulf and Mediterranean.
Ashore, Rogers commanded Naval Security Group Activity Winter Harbor, Maine (1998-2000); and, has served at Naval Security Group Department; NAVCOMSTA Rota, Spain; Naval Military Personnel Command; Commander in Chief, U.S. Atlantic Fleet; the Bureau of Personnel as the cryptologic junior officer detailer; and, Commander, Naval Security Group Command as aide and executive assistant (EA) to the commander.
Rogers’ joint service both afloat and ashore has been extensive and, prior to becoming a flag officer, he served at U.S. Atlantic Command, CJTF 120 Operation Support Democracy (Haiti), Joint Force Maritime Component Commander, Europe, and the Joint Staff. His Joint Staff duties (2003-2007) included leadership of the J3 Computer Network Attack/Defense and IO Operations shops, EA to the J3, EA to two Directors of the Joint Staff, special assistant to the Chairman of the Joint Chiefs of Staff, director of the Chairman’s Action Group, and a leader of the JCS Joint Strategic Working Group.
Rogers is a distinguished graduate of the National War College and a graduate with highest distinction from the Naval War College. He is also a Massachusetts Institute of Technology Seminar XXI fellow and holds a Master of Science in National Security Strategy.
Monday, October 14, 2013
N.S.A. Director Firmly Defends Surveillance Efforts
NSA Director Defends Spying
http://www.nytimes.com/2013/10/13/us/nsa-director-gives-firm-and-broad-defense-of-surveillance-efforts.html
N.S.A. Director Firmly Defends Surveillance Efforts
By DAVID E. SANGER and THOM SHANKER
Published: October 12, 2013
FORT MEADE, Md. — The director of the National Security Agency, Gen. Keith B. Alexander, said in an interview that to prevent terrorist attacks he saw no effective alternative to the N.S.A.’s bulk collection of telephone and other electronic metadata from Americans. But he acknowledged that his agency now faced an entirely new reality, and the possibility of Congressional restrictions, after revelations about its operations at home and abroad.
While offering a detailed defense of his agency’s work, General Alexander said the broader lesson of the controversy over disclosures of secret N.S.A. surveillance missions was that he and other top officials have to be more open in explaining the agency’s role, especially as it expands its mission into cyberoffense and cyberdefense.
“Given where we are and all the issues that are on the table, I do feel it’s important to have a public, transparent discussion on cyber so that the American people know what’s going on,” General Alexander said. “And in order to have that, they need to understand the truth about what’s going on.”
General Alexander, a career Army intelligence officer who also serves as head of the military’s Cyber Command, has become the public face of the secret — and, to many, unwarranted — government collection of records about personal communications in the name of national security. He has given a number of speeches in recent weeks to counter a highly negative portrayal of the N.S.A.’s work, but the 90-minute interview was his most extensive personal statement on the issue to date.
Speaking at the agency’s heavily guarded headquarters, General Alexander acknowledged that his agency had stumbled in responding to the revelations by Edward J. Snowden, the contractor who stole thousands of documents about the N.S.A.’s most secret programs.
But General Alexander insisted that the chief problem was a public misunderstanding about what information the agency collects — and what it does not — not the programs themselves.
“The way we’ve explained it to the American people,” he said, “has gotten them so riled up that nobody told them the facts of the program and the controls that go around it.” But he was firm in saying that the disclosures had allowed adversaries, whether foreign governments or terrorist organizations, to learn how to avoid detection by American intelligence and had caused “significant and irreversible damage” to national security.
General Alexander said that he was extremely sensitive to the power of the software tools and electronic weapons being developed by the United States for surveillance and computer-network warfare, and that he set a very high bar for when the nation should use them for offensive purposes.
“I see no reason to use offensive tools unless you’re defending the country or in a state of war, or you want to achieve some really important thing for the good of the nation and others,” he said.
Those comments were prompted by a document in the Snowden trove that said the United States conducted more than 200 offensive cyberattacks in 2011 alone. But American officials say that in reality only a handful of attacks have been carried out. They say the erroneous estimate reflected an inaccurate grouping of other electronic missions.
But General Alexander would not discuss any specific cases in which the United States had used those weapons, including the best-known example: its years-long attack on Iran’s nuclear enrichment facility at Natanz. To critics of President Obama’s administration, that decision made it easier for China, Iran and other nations to justify their own use of cyberweapons.
General Alexander, who became the N.S.A. director in 2005, will retire early next year. The timing of his departure was set in March when his tour was extended for a third time, according to officials, who said it had nothing to do with the surveillance controversy spawned by the leaks. The appointment of his successor is likely to be a focal point of Congressional debate over whether the huge infrastructure that was built during his tenure will remain or begin to be restricted.
Senator Patrick J. Leahy, a Vermont Democrat who leads the Senate Judiciary Committee, has already drafted legislation to eliminate the N.S.A.’s ability to systematically obtain Americans’ calling records. And Representative Jim Sensenbrenner, a Wisconsin Republican and co-author of the Patriot Act, is drafting a bill that would cut back on domestic surveillance programs.
General Alexander was by turns folksy and firm in the interview. But he was unapologetic about the agency’s strict culture of secrecy and unabashed in describing its importance to defending the nation.
He insisted that it would have been impossible to have made public, in advance of the revelations by Mr. Snowden, the fact that the agency collected what it calls the “business records” of all telephone calls, and many other electronic communications, made in the United States. The agency is under rules preventing it from investigating that so-called haystack of data unless it has a “reasonable, articulable” justification, involving communications with terrorists abroad, he added.
But he said the agency had not told its story well. As an example, he said, the agency itself killed a program in 2011 that collected the metadata of about 1 percent of all of the e-mails sent in the United States. “We terminated it,” he said. “It was not operationally relevant to what we needed.”
However, until it was killed, the N.S.A. had repeatedly defended that program as vital in reports to Congress.
Senior officials also said that one document in the Snowden revelations, an agreement with Israel, had been misinterpreted by those who believed that it meant the N.S.A. was sharing raw intelligence data on Americans, including the metadata on phone calls. Officials said the probability of American content in the shared data was extremely small.
General Alexander said that confronting what he called the two biggest threats facing the United States — terrorism and cyberattacks — would require the application of expanded computer monitoring. In both cases, he said, he was open to much of that work being done by private industry, which he said could be more efficient than government.
In fact, he said, a direct government role in filtering Internet traffic into the United States, in an effort to stop destructive attacks on Wall Street, American banks and the theft of intellectual property, would be inefficient and ineffective.
“I think it leads people to the wrong conclusion, that we’re reading their e-mails and trying to listen to their phone calls,” he said.
Although he acknowledged that the N.S.A. must change its dialogue with the public, General Alexander was adamant that the agency adhered to the law.
“We followed the law, we follow our policies, we self-report, we identify problems, we fix them,” he said. “And I think we do a great job, and we do, I think, more to protect people’s civil liberties and privacy than they’ll ever know.”
A version of this article appears in print on October 13, 2013, on page A15 of the New York edition with the headline: N.S.A. Director Firmly Defends Surveillance Efforts.
Wednesday, October 9, 2013
Meltdowns Hobble NSA Data Center!
A version of this article appeared October 8, 2013, on page A1 in the U.S. edition of The Wall Street Journal, with the headline: Meltdowns Hobble NSA Data Center.
Meltdowns Hobble NSA Data Center
Investigators Stumped by What's Causing Power Surges That Destroy Equipment
By
SIOBHAN GORMAN
Chronic electrical surges at the massive new data-storage facility central to the National Security Agency's spying operation have destroyed hundreds of thousands of dollars worth of machinery and delayed the center's opening for a year, according to project documents and current and former officials.
There have been 10 meltdowns in the past 13 months that have prevented the NSA from using computers at its new Utah data-storage center, slated to be the spy agency's largest, according to project documents reviewed by The Wall Street Journal.
One project official described the electrical troubles—so-called arc fault failures—as "a flash of lightning inside a 2-foot box." These failures create fiery explosions, melt metal and cause circuits to fail, the official said.
The causes remain under investigation, and there is disagreement whether proposed fixes will work, according to officials and project documents. One Utah project official said the NSA planned this week to turn on some of its computers there.
NSA spokeswoman Vanee Vines acknowledged problems but said "the failures that occurred during testing have been mitigated. A project of this magnitude requires stringent management, oversight, and testing before the government accepts any building."
The Utah facility, one of the Pentagon's biggest U.S. construction projects, has become a symbol of the spy agency's surveillance prowess, which gained broad attention in the wake of leaks from NSA contractor Edward Snowden. It spans more than one-million square feet, with construction costs pegged at $1.4 billion—not counting the Cray supercomputers that will reside there.
Exactly how much data the NSA will be able to store there is classified. Engineers on the project believe the capacity is bigger than Google's largest data center. Estimates are in a range difficult to imagine but outside experts believe it will keep exabytes or zettabytes of data. An exabyte is roughly 100,000 times the size of the printed material in the Library of Congress; a zettabyte is 1,000 times larger.
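As a rough sense of scale for those units (the roughly 10-terabyte figure commonly quoted for the Library of Congress's printed collection is an outside assumption, not from the article):

\[
1~\text{EB} = 10^{18}~\text{bytes}, \qquad 10~\text{TB} = 10^{13}~\text{bytes}, \qquad \frac{10^{18}}{10^{13}} = 10^{5} = 100{,}000
\]
\[
1~\text{ZB} = 1{,}000~\text{EB} = 10^{21}~\text{bytes}
\]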
But without a reliable electrical system to run computers and keep them cool, the NSA's global surveillance data systems can't function. The NSA chose Bluffdale, Utah, to house the data center largely because of the abundance of cheap electricity. It continuously uses 65 megawatts, which could power a small city of at least 20,000, at a cost of more than $1 million a month, according to project officials and documents.
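Taking the reported 65-megawatt continuous draw and the roughly $1 million monthly bill at face value, the implied rate is about 2 cents per kilowatt-hour, consistent with cheap industrial power:

\[
65{,}000~\text{kW} \times 730~\tfrac{\text{h}}{\text{month}} \approx 4.7 \times 10^{7}~\tfrac{\text{kWh}}{\text{month}}, \qquad \frac{\$1{,}000{,}000}{4.7 \times 10^{7}~\text{kWh}} \approx \$0.02/\text{kWh}
\]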
Utah is the largest of several new NSA data centers, including a nearly $900 million facility at its Fort Meade, Md., headquarters and a smaller one in San Antonio. The first of four data facilities at the Utah center was originally scheduled to open in October 2012, according to project documents.
In the wake of the Snowden leaks, the NSA has been criticized for its expansive domestic operations. Through court orders, the NSA collects the phone records of nearly all Americans and has built a system with telecommunications companies that provides coverage of roughly 75% of Internet communications in the U.S.
In another program called Prism, companies including Google, Microsoft, Facebook and Yahoo are under court orders to provide the NSA with account information. The agency said it legally sifts through the collected data to advance its foreign intelligence investigations.
The data-center delays show that the NSA's ability to use its powerful capabilities is undercut by logistical headaches. Documents and interviews paint a picture of a project that cut corners to speed building.
Backup generators have failed numerous tests, according to project documents, and officials disagree about whether the cause is understood. There are also disagreements among government officials and contractors over the adequacy of the electrical control systems, a project official said, and the cooling systems also remain untested.
The Army Corps of Engineers is overseeing the data center's construction. Chief of Construction Operations Norbert Suter said, "the cause of the electrical issues was identified by the team, and is currently being corrected by the contractor." He said the Corps would ensure the center is "completely reliable" before handing it over to the NSA.
But another government assessment concluded the contractor's proposed solutions fall short and the causes of eight of the failures haven't been conclusively determined. "We did not find any indication that the proposed equipment modification measures will be effective in preventing future incidents," said a report last week by special investigators from the Army Corps of Engineers known as a Tiger Team.
The architectural firm KlingStubbins designed the electrical system. The firm is a subcontractor to a joint venture of three companies: Balfour Beatty Construction, DPR Construction and Big-D Construction Corp. A KlingStubbins official referred questions to the Army Corps of Engineers.
The joint venture said in a statement it expected to submit a report on the problems within 10 days: "Problems were discovered with certain parts of the unique and highly complex electrical system. The causes of those problems have been determined and a permanent fix is being implemented."
The first arc fault failure at the Utah plant was on Aug. 9, 2012, according to project documents. Since then, the center has had nine more failures, most recently on Sept. 25. Each incident caused as much as $100,000 in damage, according to a project official.
It took six months for investigators to determine the causes of two of the failures. In the months that followed, the contractors employed more than 30 independent experts that conducted 160 tests over 50,000 man-hours, according to project documents.
This summer, the Army Corps of Engineers dispatched its Tiger Team, officials said. In an initial report, the team said the cause of the failures remained unknown in all but two instances.
The team said the government has incomplete information about the design of the electrical system that could pose new problems if settings need to change on circuit breakers. The report concluded that efforts to "fast track" the Utah project bypassed regular quality controls in design and construction.
Contractors have started installing devices that insulate the power system from a failure and would reduce damage to the electrical machinery. But the fix wouldn't prevent the failures, according to project documents and current and former officials.
Contractor representatives wrote last month to NSA officials to acknowledge the failures and describe their plan to ensure there is reliable electricity for computers. The representatives said they didn't know the true source of the failures but proposed remedies they believed would work. With those measures and others in place, they said, they had "high confidence that the electrical systems will perform as required by the contract."
A couple of weeks later, on Sept. 23, the contractors reported they had uncovered the "root cause" of the electrical failures, citing a "consensus" among 30 investigators, which didn't include government officials. Their proposed solution was the same device they had already begun installing.
The Army Corps of Engineers' Tiger Team said the contractor's explanations were unproven. The causes of the incidents "are not yet sufficiently understood to ensure that [the NSA] can expect to avoid these incidents in the future," their report said.
Write to Siobhan Gorman at siobhan.gorman@wsj.com
Sunday, October 6, 2013
NSA tracks Google ads to find Tor users
Packet Staining
Related:
NSA tracks Google ads to find Tor users:
http://news.cnet.com/8301-1009_3-57606178-83/nsa-tracks-google-ads-to-find-tor-users/
GCHQ packet staining of Tor users:
http://cryptome.org/2013/10/gchq-mullenize.pdf
http://prezi.com/p5et9yawg2c6/ip-packet-staining/
http://tools.ietf.org/html/draft-macaulay-6man-packet-stain-00
6man Working Group
Internet-Draft
Intended status: Standards Track
Expires: August 17, 2012

T. Macaulay
Bell Canada
February 14, 2012

IPv6 packet staining
draft-macaulay-6man-packet-stain-00

Abstract

This document specifies the application of security staining on IPv6 datagrams and the minimum requirements for IPv6 nodes staining flows, IPv6 nodes forwarding stained packets and interpreting stains on flows. The usage of the packet staining destination option enables proactive delivery of security intelligence to IPv6 nodes such as firewalls and intrusion prevention systems, and end-points such as servers, workstations, mobile and smart devices and an infinite array of as-yet-to-be-invented sensors and controllers.

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." This Internet-Draft will expire on August 17, 2012.

Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
2. Conventions used in this document
3. Background
3.1. Packet Staining Benefits
3.2. Implementation and support models
3.3. Use cases
4. Requirements for staining IPv6 packets
5. Packet Stain Destination Option (PSDO)
6. Acknowledgements
7. Security Considerations
8. IANA Considerations
9. Normative References
Author's Address
1. Introduction

From the viewpoint of the network layer, a flow is a sequence of packets sent from a particular source to a particular unicast, anycast, or multicast destination. From an upper layer viewpoint, a flow could consist of all packets in one direction of a specific transport connection or media stream. However, a flow is not necessarily 1:1 mapped to a transport connection. Traditionally, flow classifiers have been based on the 5-tuple of the source and destination addresses, ports, and the transport protocol type. However, as the growth of internetworked devices continues under IPv6, security issues associated with the reputation of the source of flows are becoming a critical criterion associated with the trust of the data payloads and the security of the destination end-points and the networks on which they reside.

The usage of security reputational intelligence associated with the source address field and possibly the port and protocol [REF1] enables packet-by-packet IPv6 security classification, where the IPv6 header extensions in the form of Destination Options may be used to stain each packet with security reputation information such that the network routing is unaffected, but intermediate security nodes and endpoint devices can apply policy decisions about incoming information flows without the requirement to assemble and treat payloads at higher levels of the stack.

IPv6 packet staining support consists of labeling datagrams with security reputation information through the addition of an IPv6 destination option in the packet header by packet manipulation devices (PMDs) in the carrier or enterprise network. This destination option may be read by in-line security nodes upstream from the packet destination, as well as by the destination nodes themselves.

2. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

3. Background
Internet based threats, in the form of both malicious software and the agents that control this software (organized crime, spies, hacktivists), have surpassed the abilities of signature-based security systems, whether they be on the enterprise perimeter, within the corporate network, on the endpoint or in-the-cloud (internet-based service). Additionally, the sensitivity of IP networks continues to grow as a new generation of smart devices appears on the networks in the form of broadband mobile devices, legacy industrial control devices, and very low-power sensors. This diverse collection of IP-based assets is coming to be known as the Internet of Things (IOT).

In response to the accelerating threats, the security vendor community has integrated its products with proprietary forms of security reputation intelligence. This intelligence is about IP addresses and domains which have been observed engaged in attack behaviours such as inappropriate messaging and traffic volumes, domain management, botnet command-and-control channel exchanges and other indicators of either compromise or malicious intent. [REF1] IP addresses may also end up on a security reputation list if they are identified as compromised through vendor-specific signature-based processes.

Security reputation intelligence from vendors is typically made available to perimeter and end-point products through proprietary, internet-based queries to vendor information bases. This system of using proactive security reputation intelligence has many benefits, but also several weaknesses and scaling challenges. Specifically, existing intelligence systems:

1. are subject to direct attack from the internet on distribution points, for instance
2. are proprietary to vendor devices
3. require fat clients consuming both bandwidth and CPU
4. introduce flow latency while queries are sent, received and processed
5. introduce intelligence latency, as reputation lists will inevitably be cached and only periodically refreshed given the number and range of vendor-specific processing elements

3.1. Packet Staining Benefits

In contrast to the challenges of current security reputation intelligence systems, packet staining has the following strengths:

1. packet staining can occur transparently in the network, presenting no attack surface
2. packet staining uses standardized, public domain IPv6 capabilities
3. security rules can be easily applied in hardware or firmware
4. reading packet stains introduces little to no latency
5. near-real-time threat intelligence distribution systems can be implemented out of band in PMDs using a standardized packet staining method, allowing multiple intelligence sources (vendor sources) to be aggregated and applied in an agnostic (cross-vendor) manner.

3.2. Implementation and support models
Packet staining may be accomplished by different entities including carriers, enterprises and third-party value-added service providers.

Carriers or service providers may elect to implement staining centres at strategic locations in the network to provide value-added services on a subscription basis. Under this model, subscribers to a security staining service would see their traffic directed through a staining centre where Destination Options are added to the IPv6 headers and IPv4 traffic is encapsulated within IPv6 tunnels with stained headers. Carriers or service providers may instead elect to stain all IPv6 traffic entering their network, and allow subscribers to process the stains at their own discretion.

If such upstream, network-based staining services are inappropriate or unavailable, enterprise data centre managers and cloud computing service providers may elect to deploy IPv6 staining at the perimeter into the internal network, tunnelling all IPv4 traffic, and allow data centre/cloud service users to process stains at their discretion. Enterprises may wish to deploy IPv6 on internal networks and stain all internal traffic, whereby security nodes and end-points may apply corporate security policy related to reputation.

3.3. Use cases

The following are example use-cases for a security technique based upon a packet staining system.

Organization Perimeter Use-case: Traffic to a subscriber is routed through a PMD in the carrier network configured to stain (apply Destination Options extensions) all packets to the subscriber's IP range which have entries in the threat intelligence information base. The PMD accesses the information base from a locally cached file or other method not defined in this draft. Packets from sources not in the information base pass through the PMD unchanged. Packets from sources in the information base have a Destination Option added to the datagram header. The Destination Option contains reputation from the information base. The format of the destination option is discussed later in this draft. IPv6 perimeter devices such as firewalls, web proxies or security routers on the perimeter of the subscriber network look for Destination Options on incoming packets with reputation stains. If a stain is found, the perimeter device applies the organization policy associated with the reputation indicated by the stain: for instance, drop the packet, quarantine the packet, issue alarms, pass the packets and associated flow to specially hardened extra-net authentication systems, or do nothing.

IPv4 Support Use-case: IPv4 header fields and options are not suitable for packet staining; however, there is a clear security benefit to supporting IPv4 flows. IPv4 traffic to a subscriber is routed through a PMD in the carrier network configured to encapsulate the IPv4 traffic in an IPv6 tunnel. The PMD applies a stain (Destination Options extension) to the IPv6 tunnel as per the Perimeter Use-case above. Subscriber perimeter devices such as firewalls, web proxies or security routers are configured to support both native IPv6 flows and IPv6 tunnels containing legacy IPv4 flows. Perimeter devices look for Destination Options on incoming IPv6 packets with reputation stains. If a stain is found, the perimeter device applies the organization policy associated with the reputation indicated by the stain to the IPv4 packet within the IPv6 tunnel. In this manner IPv4 support may be transparent to end-users and applications.

IPv6 End-point Use-case: IPv6 end-points may make use of reputation stains by processing Destination Options before engaging in any application-level processing. In the case of certain classes of smart device, remote and mobile sensors, reputation stains may be a critical form of security when other mitigations such as signature bases and firewalls are too power and processor intensive to support.

URL-specific stains: It is a common occurrence to see large public content portals with millions of users sharing dozens of addresses. Frequently, malicious content will be loaded to such sites. This content represents a very small fraction of the otherwise legitimate content on the site, which may be under the direct control of entirely separate entities. Degrading the reputation of IP addresses used by these large portals based on a very small amount of content is problematic. For such sites, reputation stains should have the ability to include the URL of malicious content, such that the reputation of only the specific portions of these large portals is degraded according to threat evidence, rather than the entire IP address, CIDR block, ASN or domain name.
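The Organization Perimeter Use-case above reduces to a simple dispatch on the stain at the perimeter device. Here is a minimal sketch in Python, purely illustrative: the draft defines no reputation taxonomy, so the one-byte risk score, the thresholds and the action names are all assumptions, not anything specified by the draft.

from enum import Enum
from typing import Optional

class Action(Enum):
    PASS = "pass"
    ALARM = "alarm"
    QUARANTINE = "quarantine"
    DROP = "drop"

def perimeter_policy(stain: Optional[bytes], trusted_source: bool) -> Action:
    # Unstained traffic, or a stain from a source not configured as trusted,
    # is left alone (the default behaviour is to do nothing).
    if stain is None or not trusted_source:
        return Action.PASS
    score = stain[0]          # assumed: the first stain byte is a 0-255 risk score
    if score >= 200:
        return Action.DROP
    if score >= 100:
        return Action.QUARANTINE
    if score >= 50:
        return Action.ALARM
    return Action.PASS

# Example: a heavily degraded reputation (0xD0 = 208) from a trusted staining service.
print(perimeter_policy(b"\xd0", trusted_source=True))   # Action.DROP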
4. Requirements for staining IPv6 packets

1. The default behaviour of a security node MUST be to leave a packet unchanged (apply no stain).
2. Reputation stains may be inserted or overwritten by security nodes in the path.
3. Reputation stains may not be applied by the sender/source of the packet.
4. The reputation staining mechanism needs to be visible to all stain-aware nodes on the path.
5. The mechanism needs to be able to traverse nodes that do not understand the reputation stains. This is required to ensure that packet staining can be incrementally deployed over the Internet.
6. The presence of the reputation staining mechanism should not significantly alter the processing of the packet by nodes, unless policy is explicitly configured. This is required to ensure that stained packets do not face any undue delays or drops due to a badly chosen mechanism.
7. A PMD should be able to distinguish a trusted stain from an untrusted stain, through mechanisms such as digital signatures or intrinsic trust among network elements.
8. A staining node MAY apply more specific and selective staining services according to subscriptions. Staining nodes SHOULD support different reputation taxonomies to support different subscribers and/or interoperability with other staining entities, and have the ability to stain flows to different subscriber sources according to different semantics.

5. Packet Stain Destination Option (PSDO)

The Packet Stain Destination Option (PSDO) is a destination option that can be included in IPv6 datagrams and that is inserted by PMDs in order to inform packet-staining-aware nodes on the path, or endpoints. The PSDO has an alignment requirement of (none).

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
                                 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                 |  Option Type  | Option Length |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |S|U|                       Stain Data                           |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 1: Packet Stain Destination Option Layout
Option Type: 8-bit identifier of the type of option. The option identifier for the reputation stain option will be allocated by the IANA.

Option Length: 8-bit unsigned integer. The length of the option (excluding the Option Type and Option Length fields).

S Bit: When this bit is set, the reputation stain option has been signed.

U Bit: When this bit is set, the reputation stain option contains a malicious URL.

Stain Data: Contains the staining data.
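For concreteness, here is a minimal Python sketch of building and parsing the raw PSDO bytes described above. It is an interpretation only: the real Option Type is left to IANA (a placeholder value is used), and the exact packing of the S and U bits alongside the stain data is an assumption read off Figure 1, not something the draft text pins down.

PSDO_OPTION_TYPE = 0x3E   # placeholder; the draft leaves the real value (TBA1) to IANA

def build_psdo(stain_data: bytes, signed: bool = False, has_url: bool = False) -> bytes:
    # Option Type, then Option Length (excluding the type/length bytes), then a
    # flags byte carrying S (signed) and U (malicious URL) in its two high bits,
    # then the opaque stain data. This is only the option TLV, not the whole
    # IPv6 Destination Options extension header.
    flags = (0x80 if signed else 0) | (0x40 if has_url else 0)
    body = bytes([flags]) + stain_data
    if len(body) > 255:
        raise ValueError("stain data too long for an 8-bit Option Length")
    return bytes([PSDO_OPTION_TYPE, len(body)]) + body

def parse_psdo(option: bytes):
    # Returns (signed, has_url, stain_data), or raises on a malformed option.
    if len(option) < 3 or option[0] != PSDO_OPTION_TYPE or option[1] != len(option) - 2:
        raise ValueError("not a well-formed PSDO option")
    flags = option[2]
    return bool(flags & 0x80), bool(flags & 0x40), option[3:]

# Example: a PMD stains a packet with a one-byte reputation value.
opt = build_psdo(b"\x05")
print(parse_psdo(opt))    # (False, False, b'\x05')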
The author wishes to achknowledge the guidance and support of Suresh Krishnan from Ericsson's Montreal lab. The author also wishes to credit Chris Mac-Stoker from NIKSUN for his substantial contributions to the early stages of the packet staining concept.7. Security Considerations
Some implementation may elect to no apply digital signature to reputation stains in the Destination Option, in which case the stain is not protected in any way, even if IPsec authentication [RFC4302] is in use. Therefore an unsigned reputation stain can be forged by an on-path attacker. Implementers are advised that any en-route change to an unsigned security reputation stain value is undetectable. Therefore packet staining use the Destination Options extension without digital signatures requires intrinsic trust among the network elements and the PMD, and the destination node or intervening security nodes such as firewalls or IDS services. For this reason, receiving nodes MAY need to take account of the network from which the stained packet was received. For instance, a multi- homed organization may have some service providers with staining Macaulay Expires August 17, 2012 [Page 8]
For this reason, receiving nodes MAY need to take account of the network from which the stained packet was received.  For instance, a multi-homed organization may have some service providers with staining services and others that do not.  A receiving node SHOULD be able to distinguish the sources from which stains are expected.  A receiving node SHOULD by default ignore any reputation stains from sources (networks or devices) that have not been specifically configured as trusted.

The reputation intelligence of IP source addresses, ASNs, CIDR blocks and domains is fundamental to the application of reputation stains within packet headers.  Such reputation information can be seeded from a variety of open and closed sources.  Poorly managed or compromised intelligence information bases can result in denial of service against legitimate IP addresses, and allow malicious entities to appear trustworthy.  Intelligence information bases themselves may be compromised in a variety of ways; for instance, the raw information feeds may be corrupted with erroneous information, or the reputation algorithms could be flawed in design or corrupted such that they generate false reputation scores.  Therefore seed intelligence SHOULD be sourced and monitored with demonstrable diligence.  Similarly, reputation algorithms should be protected from unauthorized change with multi-layered access controls.

The value of reputation stains will be directly proportional to the trustworthiness, reliability and reputation of the intelligence source itself.  Operators of security nodes SHOULD have defined and auditable methods upon which they select and manage the source of reputation intelligence and the packet staining infrastructure itself.
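As a small sketch of the receiving-node default described above (stains are ignored unless the source has been explicitly configured as trusted), assuming an illustrative list of provider networks:

import ipaddress

# Illustrative list of networks whose stains this node has been configured to trust.
TRUSTED_STAINING_NETWORKS = [
    ipaddress.ip_network("2001:db8:100::/48"),
    ipaddress.ip_network("2001:db8:200::/48"),
]

def stain_is_trusted(src_addr):
    """True only when the packet's source lies inside a configured provider network."""
    src = ipaddress.ip_address(src_addr)
    return any(src in net for net in TRUSTED_STAINING_NETWORKS)

def effective_stain(src_addr, stain):
    """Default policy: ignore stains from sources that are not explicitly trusted."""
    return stain if stain_is_trusted(src_addr) else None

print(effective_stain("2001:db8:100::7", b"reputation=low"))   # stain honoured
print(effective_stain("2001:db8:999::7", b"reputation=low"))   # stain ignored -> None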
8. IANA Considerations

This document defines a new IPv6 destination option for carrying security reputation packet stains.  IANA is requested to assign a new destination option type (TBA1) in the Destination Options registry maintained at http://www.iana.org/assignments/ipv6-parameters:

   1) Signed Security Reputation Option
   2) Unsigned Security Reputation Option
   3) Signed Security Reputation Option with malicious URL
   4) Unsigned Security Reputation Option with malicious URL

The act bits for this option need to be 10 and the chg bit needs to be 0.
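For readers unfamiliar with the encoding: per RFC 2460, the two high-order bits of a destination option type (the "act" bits) tell a node that does not recognise the option how to react, and the next bit (the "chg" bit) says whether the option data may change en route.  Whatever value IANA assigns as TBA1 would therefore have to match the shape checked in this small worked example; 0x9E and 0x3E are purely illustrative.

def act_chg_ok(opt_type):
    """Check that the two high-order 'act' bits are 10 and the 'chg' bit is 0 (RFC 2460)."""
    act = (opt_type >> 6) & 0b11
    chg = (opt_type >> 5) & 0b1
    return act == 0b10 and chg == 0

assert act_chg_ok(0x9E)        # 0b100_11110: act=10, chg=0, so a value of this shape would fit
assert not act_chg_ok(0x3E)    # 0b001_11110: act=00, chg=1, so a value of this shape would not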
9. Normative References

[REF1]    Macaulay, T., "Upstream Intelligence: anatomy, architecture, case studies and use-cases", Information Assurance Newsletter, DoD, August 2010 to February 2011.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6 (IPv6) Specification", RFC 2460, December 1998.

Author's Address

   Tyson Macaulay
   Bell Canada
   160 Elgin, Floor 5
   Ottawa, Ontario
   Canada

   Email: tyson.macaulay@bell.ca
Related previous work:
Network Working Group                                       S. Bellovin
Request for Comments: 3514                           AT&T Labs Research
Category: Informational                                     1 April 2003


                  The Security Flag in the IPv4 Header
Status of this Memo

   This memo provides information for the Internet community.  It does not specify an Internet standard of any kind.  Distribution of this memo is unlimited.

Copyright Notice

   Copyright (C) The Internet Society (2003).  All Rights Reserved.

Abstract

   Firewalls, packet filters, intrusion detection systems, and the like often have difficulty distinguishing between packets that have malicious intent and those that are merely unusual.  We define a security flag in the IPv4 header as a means of distinguishing the two cases.

1. Introduction
Firewalls [CBR03], packet filters, intrusion detection systems, and the like often have difficulty distinguishing between packets that have malicious intent and those that are merely unusual.  The problem is that making such determinations is hard.  To solve this problem, we define a security flag, known as the "evil" bit, in the IPv4 [RFC791] header.  Benign packets have this bit set to 0; those that are used for an attack will have the bit set to 1.

1.1. Terminology
The keywords MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT, SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL, when they appear in this document, are to be interpreted as described in [RFC2119].

2. Syntax
The high-order bit of the IP fragment offset field is the only unused bit in the IP header.  Accordingly, the selection of the bit position is not left to IANA.
The bit field is laid out as follows:

         0
        +-+
        |E|
        +-+

Currently-assigned values are defined as follows:

0x0  If the bit is set to 0, the packet has no evil intent.  Hosts, network elements, etc., SHOULD assume that the packet is harmless, and SHOULD NOT take any defensive measures.  (We note that this part of the spec is already implemented by many common desktop operating systems.)

0x1  If the bit is set to 1, the packet has evil intent.  Secure systems SHOULD try to defend themselves against such packets.  Insecure systems MAY choose to crash, be penetrated, etc.
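In the spirit of this 1 April RFC, a minimal sketch (not part of the RFC) of setting and testing the bit on a raw IPv4 header.  It assumes the "high-order bit of the fragment offset field" is the top bit of header byte 6 (the reserved flag in the flags/fragment-offset word), and it leaves the header checksum untouched.

EVIL_BIT = 0x80   # high-order bit of header byte 6 (the flags/fragment-offset word)

def set_evil_bit(ip_header, evil=True):
    """Return a copy of a raw IPv4 header with the evil bit set or cleared."""
    hdr = bytearray(ip_header)
    if evil:
        hdr[6] |= EVIL_BIT
    else:
        hdr[6] &= 0xFF ^ EVIL_BIT
    return bytes(hdr)            # note: the header checksum is not recomputed here

def is_evil(ip_header):
    """RFC 3514 compliance check for receivers."""
    return bool(ip_header[6] & EVIL_BIT)

benign = bytes(20)               # an all-zero 20-byte header, for illustration only
assert is_evil(set_evil_bit(benign))
assert not is_evil(benign)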
3. Setting the Evil Bit

There are a number of ways in which the evil bit may be set.  Attack applications may use a suitable API to request that it be set.  Systems that do not have other mechanisms MUST provide such an API; attack programs MUST use it.

Multi-level insecure operating systems may have special levels for attack programs; the evil bit MUST be set by default on packets emanating from programs running at such levels.  However, the system MAY provide an API to allow it to be cleared for non-malicious activity by users who normally engage in attack behavior.

Fragments that by themselves are dangerous MUST have the evil bit set.  If a packet with the evil bit set is fragmented by an intermediate router and the fragments themselves are not dangerous, the evil bit MUST be cleared in the fragments, and MUST be turned back on in the reassembled packet.

Intermediate systems are sometimes used to launder attack connections.  Packets to such systems that are intended to be relayed to a target SHOULD have the evil bit set.

Some applications hand-craft their own packets.  If these packets are part of an attack, the application MUST set the evil bit by itself.

In networks protected by firewalls, it is axiomatic that all attackers are on the outside of the firewall.  Therefore, hosts inside the firewall MUST NOT set the evil bit on any packets.
Because NAT [RFC3022] boxes modify packets, they SHOULD set the evil bit on such packets.  "Transparent" http and email proxies SHOULD set the evil bit on their reply packets to the innocent client host.

Some hosts scan other hosts in a fashion that can alert intrusion detection systems.  If the scanning is part of a benign research project, the evil bit MUST NOT be set.  If the scanning per se is innocent, but the ultimate intent is evil and the destination site has such an intrusion detection system, the evil bit SHOULD be set.

4. Processing of the Evil Bit
Devices such as firewalls MUST drop all inbound packets that have the evil bit set.  Packets with the evil bit off MUST NOT be dropped.  Dropped packets SHOULD be noted in the appropriate MIB variable.

Intrusion detection systems (IDSs) have a harder problem.  Because of their known propensity for false negatives and false positives, IDSs MUST apply a probabilistic correction factor when evaluating the evil bit.  If the evil bit is set, a suitable random number generator [RFC1750] must be consulted to determine if the attempt should be logged.  Similarly, if the bit is off, another random number generator must be consulted to determine if it should be logged despite the setting.

The default probabilities for these tests depend on the type of IDS.  Thus, a signature-based IDS would have a low false positive value but a high false negative value.  A suitable administrative interface MUST be provided to permit operators to reset these values.

Routers that are not intended as security devices SHOULD NOT examine this bit.  This will allow them to pass packets at higher speeds.

As outlined earlier, host processing of evil packets is operating-system dependent; however, all hosts MUST react appropriately according to their nature.
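A sketch of the probabilistic correction factor described above; the default probabilities here are illustrative only, since the RFC leaves them to the operator and to the type of IDS.

import random

P_LOG_IF_EVIL = 0.90       # chance of logging when the evil bit is set
P_LOG_IF_BENIGN = 0.05     # chance of logging anyway when the evil bit is clear

def should_log(evil_bit, rng=random):
    """Consult a random number generator before trusting the evil bit either way."""
    p = P_LOG_IF_EVIL if evil_bit else P_LOG_IF_BENIGN
    return rng.random() < p

print(should_log(True), should_log(False))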
5. Related Work

Although this document only defines the IPv4 evil bit, there are complementary mechanisms for other forms of evil.  We sketch some of those here.

For IPv6 [RFC2460], evilness is conveyed by two options.  The first, a hop-by-hop option, is used for packets that damage the network, such as DDoS packets.  The second, an end-to-end option, is for packets intended to damage destination hosts.  In either case, the
option contains a 128-bit strength indicator, which says how evil the packet is, and a 128-bit type code that describes the particular type of attack intended.

Some link layers, notably those based on optical switching, may bypass routers (and hence firewalls) entirely.  Accordingly, some link-layer scheme MUST be used to denote evil.  This may involve evil lambdas, evil polarizations, etc.

DDoS attack packets are denoted by a special diffserv code point.

An application/evil MIME type is defined for Web- or email-carried mischief.  Other MIME types can be embedded inside of evil sections; this permits easy encoding of word processing documents with macro viruses, etc.

6. IANA Considerations
This document defines the behavior of security elements for the 0x0 and 0x1 values of this bit.  Behavior for other values of the bit may be defined only by IETF consensus [RFC2434].

7. Security Considerations
Correct functioning of security mechanisms depends critically on the evil bit being set properly.  If faulty components do not set the evil bit to 1 when appropriate, firewalls will not be able to do their jobs properly.  Similarly, if the bit is set to 1 when it shouldn't be, a denial of service condition may occur.

8. References
[CBR03]   Cheswick, W.R., Bellovin, S.M. and A.D. Rubin, "Firewalls and Internet Security: Repelling the Wily Hacker", Second Edition, Addison-Wesley, 2003.

[RFC791]  Postel, J., "Internet Protocol", STD 5, RFC 791, September 1981.

[RFC1750] Eastlake, D., 3rd, Crocker, S. and J. Schiller, "Randomness Recommendations for Security", RFC 1750, December 1994.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2434] Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 2434, October 1998.
[RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6 (IPv6) Specification", RFC 2460, December 1998.

[RFC3022] Srisuresh, P. and K. Egevang, "Traditional IP Network Address Translator (Traditional NAT)", RFC 3022, January 2001.

9. Author's Address
   Steven M. Bellovin
   AT&T Labs Research
   Shannon Laboratory
   180 Park Avenue
   Florham Park, NJ 07932

   Phone: +1 973-360-8656
   EMail: bellovin@acm.org