Sunday morning, Dec. 26, 2004 -- It came without warning. One of the largest earthquakes in recorded history struck in the Indian Ocean, just off the west coast of Sumatra. It created a wave, more than 100 feet high, that swept across the shores of 14 countries.
The toll was horrendous: more than 230,000 killed; millions more lives disrupted. More than $14 billion in humanitarian assistance poured into the region.
The greatest tragedy of all is that hundreds of thousands of lives could have been saved with a little early warning and better planning. It had taken several hours for the killer tsunami to fan out across the ocean. Still, it caught many communities by complete surprise.
Since that disaster, much has been done to improve the tsunami-warning system in the region. One innovation involves using Internet social networking tools to get the word out. But this new system introduced new problems.
In 2007, hackers used an SMS text-reporting system to send a fake tsunami warning to cell phones throughout Indonesia. Last year, hackers distributed another fake tsunami warning via the Twitter account of the Indonesian president's disaster adviser. The false alarms panicked more than a few people. Little wonder, considering Indonesia was one of the nations devastated by the 2004 tsunami.
There is a lesson in the tsunami experience for "risk communications" in the United States. The Department of Homeland Security has dumped its ridiculous color-coded alert system. Good. But Washington still faces the perplexing problem of information assurance: making sure that the new warnings issued are precise and reliable.
As has been demonstrated in Indonesia, the Internet can spread deliberate deception, rumors and inaccurate information just as fast and as widely as it spreads facts.
Social networks often rely on crowdsourcing to filter the best information from the rest. Online, the "wisdom of the crowds" is supposed to separate the good stuff from the bad, whether it's users ganging up to rate movies, sushi or the validity of reports.
Crowdsourcing makes sense for many purveyors of online information and services -- like eBay, where customers rate vendors. But government is not one of them.
In a free society, government communications must be legitimate. When governments issue information or conclusions that turn out to be inaccurate or unreliable, the consequences are far more serious than if the source is just your average tweeter on Twitter.
It's laudable that DHS plans to scrap the color-coded alert system that was little short of stupid. But the leaders of DHS still have to make sure, this time, they get it right.
According to a "pre-decisional" draft obtained by the Associated Press, there will be just two kinds of warnings: elevated and imminent. That makes a lot of sense, paralleling the kinds of "warnings" and "watches" used for extreme weather notifications.
The system also plans to use social networking tools like Facebook and Twitter to issue alerts. That makes sense, too. But DHS should give more thought to how it will educate the public on the new procedures. Inevitably, someone will try to spoof the system.
For any risk communications process to work, the alerts issued must be credible, understandable, actionable -- and legitimate. Anything less is a waste of time and effort.
James Jay Carafano is a senior research fellow for national security at the Heritage Foundation.
First appeared in The Washington Examiner