John Reed Stark

As I noted in a recent post, on June 8, 2016, the SEC, in what one commentator called “the most significant SEC cybersecurity-related action to date,” announced that Morgan Stanley Smith Barney LLC had agreed to pay a $1 million penalty to settle charges that as a result of its alleged failure to adopt written policies and procedures reasonably designed to protect customer data, some customer information was hacked and offered for sale online. In the following guest post, John Reed Stark, President of John Reed Stark Consulting and former Chief of the SEC’s Office of Internet Enforcement, takes a look at the circumstances at the company that led to this enforcement action and reviews the important lessons that can be learned from what happened. A version of this article originally appeared on CybersecurityDocket. I would like to thank John for his willingness to publish his article as a guest post on this site. I welcome guest post submissions from responsible authors on topics of interest to this site’s readers. Please contact me directly if you would like to submit a guest post. Here is John’s guest post.

 

****************************

 

The U.S. Securities and Exchange Commission’s (SEC’s) cybersecurity regulatory and enforcement onslaught continues . . .

 

This time the SEC hit Morgan Stanley with a $1 million penalty for security lapses that enabled a former financial adviser to tap into its computers and take client data home. Likely hacked while in the possession of the financial adviser, some of the client data appeared online between December 2014 and February 2015.

 

There is a slew of important takeaways from the SEC action, especially that cybersecurity failures can, and will, happen to any financial firm. And in this instance, after recognizing its cybersecurity failures, Morgan Stanley did just about everything right. Even better than right – Morgan Stanley actually excelled in its response.

 

BACKGROUND

 

Last week the SEC issued a settled administrative order finding that Morgan Stanley Smith Barney (now rebranded as Morgan Stanley Wealth Management) failed to adopt written policies and procedures reasonably designed to protect customer data.

 

As a result of these failures, from 2011 to 2014, then-employee Galen J. Marsh impermissibly accessed and transferred the data regarding approximately 730,000 accounts to his personal server. A likely third-party hack of Marsh’s personal server resulted in portions of the confidential data being posted on the Internet with offers to sell larger quantities.

 

Morgan Stanley.  The SEC’s charges against Morgan Stanley are pretty straightforward.  According to the SEC:

 

  • The federal securities laws require registered broker-dealers and investment advisers to adopt written policies and procedures reasonably designed to protect customer records and information.
  • Morgan Stanley’s policies and procedures were not reasonable, however, for two internal web applications or “portals” that allowed its employees to access customers’ confidential account information.
  • For these portals, Morgan Stanley did not have effective authorization modules for more than 10 years to restrict employees’ access to customer data based on each employee’s legitimate business need.
  • Morgan Stanley also did not audit or test the relevant authorization modules, nor did it monitor or analyze employees’ access to and use of the portals.
  • Consequently, then-employee Galen J. Marsh downloaded and transferred confidential data to his personal server at home between 2011 and 2014.

 

The SEC’s order finds that Morgan Stanley violated Rule 30(a) of Regulation S-P, also known as the “Safeguards Rule.”  Morgan Stanley agreed to settle the charges without admitting or denying the findings.

 

Galen Marsh. Thirty-year-old Galen Marsh joined Morgan Stanley in 2008 as a sales assistant, later becoming a customer service associate and then, in 2014, a financial advisor. In 2011, Marsh apparently discovered that he could access a database despite lacking authorization to do so, and he found a similar weakness in another database in 2014.

 

Marsh transferred customer data to his personal server by accessing his then-personal website, galenmarsh.com, which had a feature that enabled him to transfer data from his Morgan Stanley computer to his personal server. At the time, Morgan Stanley's Internet filtering software did not prevent employees from accessing such 'uncategorized' websites from Morgan Stanley computers.

 

In a separate order, Marsh agreed to an industry and penny stock bar with the right to apply for reentry after five years. In a parallel criminal prosecution by the U.S. Attorney's Office for the Southern District of New York (SDNY), Marsh pled guilty to criminal charges, filed last September, of one count of unauthorized access to a computer. As announced by the SDNY, Marsh was later sentenced to 36 months of probation and ordered to pay $600,000 in restitution. To learn Marsh's side of the story, here is a copy of Marsh's sentencing memorandum, filed by his attorneys.

 

KEY TAKEAWAYS

 

Morgan Stanley's Conduct was Exemplary; the Firm Did Everything Right. While Morgan Stanley may have been at fault for the actual incident (because of its system failures regarding data access modules), every firm is going to experience cybersecurity lapses. No firm can boast of perfect cybersecurity; mistakes will always happen. So Morgan Stanley's response, the key to analyzing any cybersecurity-related incident, is what matters most. And the firm's response grades an A+.  Here is why:

 

According to the New York Times, in mid-December 2014, a posting appeared on the Internet site Pastebin offering six million account records, including passwords and login data for clients of Morgan Stanley.  Two weeks later, a new posting on the information-sharing site offered a teaser of actual records from 1,200 accounts, and provided a link for people interested in purchasing more. The link pointed to a website that sells digital files for virtual currencies like Bitcoin.  In this case, the files were being sold for a more obscure currency, Speedcoin, a virtual currency that seems even more suspicious than Bitcoin.

 

Morgan Stanley Detected the Online Sale of Its Client Data. Reports indicate that Morgan Stanley officials picked up on the posting almost immediately, after it triggered an alert generated by the firm's routine surveillance of a number of websites that traffic in sensitive information. The offer was taken down the same day Morgan Stanley discovered the leak.  Very impressive.

 

Morgan Stanley Dispensed with Marsh Quickly and Firmly. In short order, Morgan Stanley traced the breach to Marsh, a financial adviser working out of its New York offices. Marsh, who had been with Morgan Stanley since 2008, was quickly fired and ultimately charged criminally for his theft of Morgan Stanley client data. Very impressive.

 

Morgan Stanley Came Clean. Morgan Stanley quickly announced details, specifically that Marsh took data on about 10 percent of its 3.5 million wealth management customers, including transactional information from customer statements. Morgan Stanley said that Marsh did not take any sensitive passwords or Social Security numbers, and that it had not found any evidence that the breach resulted in any losses to customers.  Very impressive.

 

Morgan Stanley Engaged an Independent Consulting Firm and Law Firm to Investigate the Data Security Incident.  With respect to the entire incident, Morgan Stanley responded swiftly and with transparency (especially with its regulators and with law enforcement) and, most importantly, according to a spokesman, conducted an independent investigation, using both an independent legal team and an independent consulting team, to truly understand its failures. Extraordinarily impressive.

 

Strong corporate leaders, like those who populate the c-suite at Morgan Stanley, seek answers from independent and neutral sources of information. Otherwise, risks are not properly exposed and examined, and they become exacerbated rather than assuaged. Remarkably, so many financial firms fail to grasp this critical necessity for independence.

 

Cybersecurity at SEC-registered entities like Morgan Stanley has become a top priority for the SEC inspections group and enforcement division.  Every SEC-registered firm, including Morgan Stanley, should anticipate the SEC’s increasing commitment to regulating cybersecurity and lack of sympathy for any sort of cybersecurity failure.  This means investigating data security incidents above all else, with independence and neutrality, a notion the SEC in particular respects and appreciates.  As an aside, Morgan Stanley should have insisted that the SEC staff include language in the SEC order citing Morgan Stanley’s engagement of an independent consultant and law firm (and other remedial actions); it is an important mitigating factor and worthy of mention.

 

Kudos go to Morgan Stanley for its expert handling of a challenging cybersecurity incident.  Other financial firms should study Morgan Stanley’s reaction and take careful notes – because what Morgan Stanley experienced can, and will, happen to all of them.

 

Cybersecurity Remains an Oxymoron. When my daughter comes home from school with a cold, it is not her fault. No one can protect her from catching a cold; they are inevitable. The same goes for data security incidents.

 

Cyber threats fall broadly into external and insider issues, and the Morgan Stanley matter involved both. The hackers who attacked Marsh represented an "external" threat, which could have been state-sponsored or perpetrated by terrorists, a military or other companies. Given the total weight of resources at the disposal of external threats (such as legions of soldiers), external threats can outgun any company, even one as large, complex and sophisticated as Morgan Stanley.

 

Marsh, on the other hand, represented an "internal" threat, a disgruntled, rogue or dishonest employee exploiting an available cybersecurity loophole. Internal threats stem not only from criminals like Marsh. Companies also face internal threats from careless, slipshod, or otherwise slack employees. Just like corrupt employees, inattentive or disaffected employees exacerbate existing vulnerabilities, lapses or other weaknesses, inevitably introducing errors and policy failures. Like external threats, internal threats can wreak havoc upon any firm, no matter how rigorous its oversight.

 

In addition, mistakes, accidents and the continual changing of information technology (IT) infrastructure create a constant, resource-draining demand to rebuild the virtual walls designed to keep intruders out and to fashion new policies and procedures for employees to learn, obey and follow.

 

Given the nature of external threats, internal threats and an ever-changing IT infrastructure, no firm can ever maintain perfect cybersecurity and there exists no silver bullet to mitigate all cybersecurity risk.

 

Remember Jeff Goldblum's character in the film Jurassic Park, Dr. Ian Malcolm, who explains Chaos Theory and the so-called Butterfly Effect by showing how water drops will stream differently every time a drop is released on a finger.  Dr. Malcolm specializes in Chaos Theory and predicts that the Jurassic Park island will quickly proceed to behave in "unpredictable fashion" and that it is "an accident waiting to happen."

 

The same goes for cybersecurity – no matter how skilled and talented an information security team, there will always be flaws, mistakes and mishaps. There are too many variables (especially variables involving people) to believe otherwise.

 

This is the Most Significant SEC Cybersecurity-related Regulatory Action to Date. The Morgan Stanley SEC enforcement action sets itself apart from other similar SEC cybersecurity matters because: 1) the matter involves Morgan Stanley, a large, sophisticated and leading financial firm; 2) a now-terminated Morgan Stanley employee was charged criminally; and 3) the incident was not a data mishap but rather involved an institutional and systemic failure, which required immediate remediation.

 

To provide some perspective, below is a quick round-up, in reverse-chronological order, of prior SEC cybersecurity-related matters of similar ilk:

 

The Craig Scott Capital Matter. On April 12, 2016, in a settled administrative proceeding against Craig Scott Capital, LLC, a registered broker-dealer, and its two principals, the SEC alleged violations of Regulation S-P’s requirements that broker-dealers adopt written policies and procedures to protect confidential customer information and records and to keep and maintain copies of all business communications.

 

Specifically, the SEC alleged that the broker-dealer or its principals failed to protect customer information by, among other things:

 

  • Using “personal email addresses to receive thousands of faxes from customers and other third parties. These faxes routinely included sensitive customer records and information, such as customer names, addresses, social security numbers, bank and brokerage account numbers, copies of driver’s licenses and passports, and other customer financial information;”
  • Using personal email addresses for business-related matters; not maintaining and preserving either these faxes or emails; and
  • Not having adequate written supervisory procedures to protect customer information and records.

 

The R.T. Jones Matter. On September 22, 2015, the SEC filed the R.T. Jones administrative action, its first "cybersecurity enforcement action," resulting from its 2014 and 2015 cybersecurity sweeps.   (Specifically, in April 2014, the SEC announced its first cybersecurity sweep of brokerage and investment advisory examinations in an SEC Risk Alert, which made the unusual and almost unprecedented move of publishing, as a "resource," the so-called examination module (i.e., questionnaire) that SEC staff planned to serve upon targets of the sweep. About a year after the SEC's first sweep, the SEC published a report containing some strong sentiments about cybersecurity. Next, on September 15, 2015, the SEC announced its second sweep of examinations into brokerage and advisory firms' cybersecurity practices, doubling down on its efforts, and once again providing an extensive examination module as a resource for regulated entities.)

 

The settled enforcement action against R.T. Jones Capital Equities Management charged alleged failures to “establish the required cybersecurity policies and procedures in advance of a breach that compromised the personally identifiable information (PII) of approximately 100,000 individuals, including thousands of the firm’s clients.” For an in-depth analysis of the SEC’s R.T. Jones matter, please visit this “Stark on IR” posting.

 

The Gunn-Allen Matters. On April 7, 2011, the SEC charged three former brokerage executives for failing to protect confidential information about their customers. Specifically, the SEC's investigation found that while Tampa-based GunnAllen Financial Inc. ("GunnAllen") was winding down its business operations, former President Frederick O. Kraus and former National Sales Manager David C. Levine violated customer privacy rules by improperly transferring customer records to another firm. The SEC also found that former Chief Compliance Officer Mark A. Ellis failed to ensure that the firm's policies and procedures were reasonably designed to safeguard confidential customer information.

 

The SEC's orders found that all three respondents willfully aided and abetted and caused GunnAllen's violations of Rule 30(a) of Regulation S-P under the Securities Exchange Act of 1934, and that Kraus and Levine willfully aided and abetted the firm's violations of Rules 7(a) and 10(a) of the same regulation. Without admitting or denying the SEC's findings, the officials each consented to the entry of an SEC order that censured them and required them to cease and desist from committing or causing any violations or future violations of the provisions charged, and to pay penalties ranging from $15,000 to $20,000 each.

 

Eric Bustillo, the head of the SEC's Miami office, stated at the time, "Brokerage customers should be able to trust that sufficient safeguards are in place to protect their private information from unauthorized access and misuse. Protecting confidential customer information is particularly important when a broker-dealer is winding down operations." Glenn S. Gordon, associate director of the Miami Regional Office, added, "GunnAllen did not have adequate policies or procedures in place to safeguard client information, ignoring several red flags from security breaches at the firm in prior years."

 

With respect to the security breaches cited by Mr. Gordon, the SEC found that, once aware of the breaches, GunnAllen’s CCO failed to direct the firm to: (1) properly assess the risk that these breaches posed to customers; (2) adopt additional written policies and procedures to protect customer information in accordance with the Safeguards Rule; and (3) take remedial steps recommended by employees, such as contacting law enforcement authorities or affected customers.

 

The SEC noted that the data breaches and the firm’s limited response to them highlighted the inadequacy of the firm’s written policies and procedures for safeguarding information, and that in failing to direct the firm to revise or supplement these policies and procedures, the CCO caused the firm to violate the Safeguards Rule.

 

The security breaches cited by the SEC did not involve what IT professionals might consider more sophisticated cyber intrusions (such as an Advanced Persistent Threat or APT, SQL Injection, botnet, malware, or other more "code-generated" cyber-attacks). Rather, the data breaches at GunnAllen were caused by the theft of the laptops of a few registered representatives and a former employee's unauthorized access of a current employee's firm e-mail account.

 

The GunnAllen settlement marked the first time that the SEC assessed financial penalties against individuals charged solely with violations of Regulation S-P, and it clearly marked the data breach territory as the SEC's own. Moreover, the settlement indicates not only that the SEC is willing to hold senior officers of financial institutions individually liable for their role in violations of Regulation S-P but also that the SEC, when assessing Regulation S-P liability, will consider whether an information security policy is sufficiently comprehensive as well as the effect of a firm's actions upon the privacy rights of customers.

 

The Dante J. DiFrancesco Matter (a FINRA Enforcement Action). Of relevance is a similar December 17, 2010 matter brought by the Financial Industry Regulatory Authority (FINRA) before its National Adjudicatory Council (NAC), captioned In re Department of Enforcement vs. Dante J. DiFrancesco. In that action, the NAC affirmed a $10,000 fine and 10-day suspension ordered by a FINRA hearing panel in a contested hearing against a broker for downloading confidential customer information from his firm's computer system onto a flash drive on his last day of employment and then sharing that information with a new firm.

 

FINRA found the broker’s actions prevented his former firm from giving its customers a reasonable opportunity to opt out of the disclosures, as required by Regulation S-P. FINRA also found the broker’s misconduct caused his new firm to improperly receive non-public personal information about his former firm’s customers.

 

The Commonwealth Equity Matter. The GunnAllen action, though the first of its kind, was not the first time the SEC enforcement division had acted with respect to privacy and information security violations of the Privacy Rule and the Safeguards Rule. For instance, on September 29, 2009, Commonwealth Equity Service LLP, a stock trading firm, similarly settled the SEC’s charges that it had violated the SEC’s Safeguards Rule.

 

Specifically, the firm experienced an information security breach when a perpetrator installed a virus on the firm’s computers and obtained log-in credentials of the firm’s registered representative. The perpetrator used the credentials to access the firm’s customer accounts and place unauthorized securities orders in excess of $500,000.

 

The SEC alleged that the firm violated the Safeguards Rule by: (1) failing to require the firm's registered representatives to maintain antivirus software on their computers; (2) failing to audit computers to determine whether antivirus software had been installed; (3) failing to implement policies and procedures to appropriately review the firm's registered representatives' computer security measures; and (4) failing to implement procedures to track and address information security issues. As a result of these failures, the SEC alleged that the firm's customer information was left vulnerable to unauthorized access. To settle the SEC's charges, Commonwealth Equity Service paid a penalty of $100,000 and agreed to cease and desist from committing or causing future violations of the Safeguards Rule.

 

The NEXT Financial Group Matter. On August 24, 2007, the SEC enforcement division alleged that NEXT Financial Group (NEXT) violated Regulation S-P by: 1) permitting registered representatives who were leaving the firm to take clients' personal financial information; and 2) aiding and abetting other firms' violations of Regulation S-P by encouraging and assisting newly recruited registered representatives to bring non-public personal information about their former firms' clients to NEXT. In 2008, an administrative law judge issued an initial decision that imposed a $125,000 fine on NEXT.

 

The allegations center on the methods used by NEXT's "transition team" to help bring on new registered representatives. According to the SEC Administrative Order, the transition team assisted new recruits by "pre-populating" account transfer documents such as automated customer account transfer forms (ACATS), new account information forms, change of broker-dealer letters, and mailing labels. The team provided all recruits with a sample Excel spreadsheet showing what types of customer information to provide in order to start pre-population.

 

The SEC Administrative Order also stated that when a registered representative left NEXT, he was allowed to take copies of all his customer files and documents, which included non-public personal information, and to download similar non-public personal information from NEXT's computer system. Further, the Administrative Order asserted that NEXT's transition team sometimes used recruits' user IDs and passwords not only to access the recruits' current broker-dealers' computer systems but also to download non-public personal information used to pre-populate documents, and to access various mutual fund and annuity company websites to extract customer information. In at least one instance, NEXT received non-public client information from a recruit who later decided not to join the firm, yet the customer information was retained in NEXT's computer system.

 

The Sydney Mondschein Matter. Somewhat akin to the NEXT Financial Group matter, and worthy of mention, is the Sydney Mondschein SEC federal action and related administrative proceeding, involving a violation of Regulation S-P by a broker who misappropriated personal identifying information from his employer for his own profit.

 

The Mondschein actions involved allegations by the SEC that, between December 2002 and August 2005, Sydney Mondschein, a registered representative, violated Regulation S-P by failing to disclose to customers that he intended to, and did, sell their personal information to insurance agents. Specifically, the SEC's complaint charged that Mondschein reaped illegal profits by secretly selling the names and other confidential personal information of over 500 of his customers to six different insurance agents.

 

The final judgment, entered on April 14, 2008, permanently enjoined Mondschein from violating Section 10(b) of the 1934 Exchange Act and Rule 10b-5 thereunder, and from aiding and abetting any violations of Rules 4(a), 5(a), and 10(a)(1) of Regulation S-P. The final judgment also ordered Mondschein to disgorge all of his ill-gotten gains of approximately $53,000, plus prejudgment interest of approximately $4,680, and to pay a penalty of $45,000.

 

In the separate related administrative action, the SEC issued an order barring Mondschein from associating with any broker or dealer, with a right to reapply after five years. In the district court action, the SEC's complaint alleged that Mondschein, a former Antioch, California stockbroker, sold his customers' personal information as sales "leads" solely to enable insurance agents to solicit these customers, many of whom had already purchased fixed or equity-indexed annuity products, to buy additional annuity products.

 

The SEC's Use of the Safeguards Rule Remains the Cornerstone of the SEC's Cybersecurity Regulatory Framework.   Violation of the SEC's non-scienter based Safeguards Rule has become the standard minimum charge in SEC cybersecurity-related enforcement actions against financial firms, just as violation of the SEC's non-scienter based internal controls rules has become the standard minimum charge in SEC accounting-related enforcement actions against public companies.  A brief history of the SEC's charging of public companies with internal controls failures illustrates this point:

 

The SEC and Internal Controls Failures. When the SEC charges a public company with a financial reporting violation, whether it involves fraud, misappropriation, breach of fiduciary duty or otherwise, the SEC violation underlying all allegations is the failure of internal controls. “Internal controls” are the procedures and practices instituted by a company to manage risk, conduct business, protect assets, and ensure that its practices comply with the law and company policy.  A particularly important species of internal controls are “internal controls over financial reporting,” addressed in Section 13 of the Securities Exchange Act of 1934.

 

Along these lines, the SEC generally requires issuers to keep books, records and accounts that accurately reflect the company’s transactions and maintain internal accounting controls to ensure that company transactions are recorded in accordance with management’s authorization and in conformity with Generally Accepted Accounting Principles.

 

In addition, Sarbanes-Oxley added new requirements for management and auditors related to internal controls. Sarbanes-Oxley Section 302 requires management to certify to, among other items, their responsibility for maintaining internal controls; disclosing significant deficiencies and material weaknesses in internal controls to auditors and audit committees; and disclosing any significant changes in internal controls. Sarbanes-Oxley Section 404 (as amended by the Dodd-Frank Act) also requires, among other things, that: (i) company management assess and report on the effectiveness of the company’s internal control over its financial reporting, and (ii) the company’s independent auditors verify management’s disclosures.

 

As the SEC's enforcement efforts concerning financial accounting practices at public companies have intensified, failure of internal controls has concomitantly evolved into an almost boiler-plate, catch-all SEC charge (and charge of last resort) for any sort of accounting problem at a public corporation, even when there is no fraud or other intentionally unlawful conduct.

 

The same now goes for cybersecurity violations at SEC regulated entities – except instead of using an internal controls failure as its fundamental jurisdictional and prosecutorial basis, the SEC uses Regulation S-P and the Safeguards Rule.

 

The Safeguards Rule. Since its promulgation, the SEC has not brought many enforcement actions for violations of the Safeguards Rule, but the SEC has now begun to step up its cybersecurity efforts considerably, including launching its September 15, 2015 and April 15, 2014 cybersecurity examination sweeps. Given the epidemic of cyber-attacks, the SEC will continue to intensify its cybersecurity enforcement efforts.

 

Like the SEC's internal controls compliance requirements for public companies, Rule 30(a) of Regulation S-P, commonly referred to as the "Safeguards Rule," has a broad regulatory bandwidth.

 

The Safeguards Rule requires broker-dealers and SEC-registered investment advisers to adopt written policies and procedures reasonably designed to protect customer information against unauthorized access and use.  Specifically, the Safeguards Rule requires every broker-dealer and investment adviser registered with the SEC to adopt written policies and procedures that address administrative, technical and physical safeguards for the protection of customer records and information, and that are reasonably designed to:

 

  • Ensure the security and confidentiality of customer records and information;
  • Protect against any anticipated threats or hazards to the security or integrity of customer records and information; and
  • Protect against unauthorized access to or use of customer records or information that could result in substantial harm or inconvenience to any customer.

 

Regulation S-P was created on March 2, 2000, when the SEC issued a notice of proposed rulemaking.  On June 22, 2000, the SEC adopted the final rule, entitled "Privacy of Consumer Financial Information."  Regulation S-P contains the privacy rules promulgated by the SEC under Section 504 of Subtitle A of Title V of the Gramm-Leach-Bliley Act (GLB), and the Safeguards Rule is part of that regulation.

 

Specifically, section 504 requires the SEC and other federal agencies to adopt rules implementing notice requirements and restrictions on a financial institution’s ability to disclose non-public personal information about consumers. Under GLB, a financial institution must provide its customers with a notice of its privacy policies and practices, and must not disclose non-public personal information about a consumer to unaffiliated third parties unless the institution provides certain information to the consumer and the consumer has not elected to opt out of the disclosure.

 

The SEC’s Administrative Forum Remains the Preferred Venue for Cybersecurity Failures. As expected, the SEC selected an administrative courtroom in its own backyard as its forum for the Morgan Stanley matter, rather than a federal courtroom — even though there was a parallel criminal action against Marsh filed in federal court.

 

The SEC's opting to charge Morgan Stanley in an administrative court makes sense because the SEC has historically charged technical securities law violations committed by SEC regulated entities in its own specialized and uniquely capable administrative forum (as opposed to more generic fraud violations, which the SEC historically charged in federal court, though the SEC has recently, with much controversy, begun charging those matters in its own administrative forum too).

 

In future SEC enforcement matters involving cybersecurity failures, the SEC will likely continue to file its charges administratively, especially if the alleged violations pertain to an SEC regulated entity violating an arguably opaque and subjective regulation such as the Safeguards Rule.

 

Victims are Presumed, Not Required. Like most other SEC cybersecurity matters, there is still no specific harm to any investor alleged or otherwise discussed in the SEC charges.

 

Not surprisingly, breached investors (i.e. customers whose data may have been exfiltrated or otherwise compromised) need not suffer any harm in order for the SEC to bring an enforcement action.  Just like any of the recent data breaches making headlines, in the Morgan Stanley matter: 1) no one could identify the actual perpetrator of the cyber-attack upon Marsh’s computer; and 2) actual harm to customers is presumed (which is always a bit of a logical leap, but that is a subject for another article).

 

The SEC, like every other regulator and law enforcement agency, relies on the ethereal axiom that some victim exists somewhere who has experienced some sort of damage, perhaps an identity theft or related computer crime.

 

Morgan Stanley was not the Victim in This Case. The treatment of cyber-attack victims usually is less about understanding and sympathy, and more about anger, vilification, suspicion and finger-pointing. The world of incident response is an upside-down one: Rather than being treated like the victim of a crime perpetrated by others, a company experiencing a cyber-attack is often treated like the criminal, becoming the defendant in federal and state enforcement actions, class actions and a litany of other costly and crippling proceedings.

 

But such is not the case here. Although Marsh was clearly the most culpable among the players involved in the Morgan Stanley matter, Morgan Stanley bears some of the blame as well.

 

While Morgan Stanley clearly took rapid, methodical, independent, objective and remedial action to mitigate the breach once it was detected, the alleged breakdown in the firm's cyber controls was what allowed for the attack in the first place.   In this instance, Morgan Stanley was not the victim of any kind of attack, such as an Advanced Persistent Threat or APT attack, spear phishing scheme or SQL injection; rather, its own control failures enabled an attacker to exfiltrate client data. Because of a programming flaw in Morgan Stanley's portals where the data was stored, Marsh accessed this data thousands of times over a three-year period.

 

Control, Monitor and Limit Employee Access to Data. The SEC was clear that Morgan Stanley failed to: (1) audit and/or test the effectiveness of the authorization modules for the portals; and (2) monitor employee access to and use of the portals.  In this regard, perhaps the most serious allegation was that Morgan Stanley had not conducted any auditing or testing of the authorization modules of the relevant portals over the 10 years that they were in use.

 

Unlimited or loosely regulated access to firm-wide systems creates opportunities for internal misbehavior (like that of Marsh) or external threat exploitation (like an APT attack or SQL Injection). Corralling, restricting and surveilling universal access to systems and data requires constant vigilance.  Segregating data access by job classification is important, requiring strict policies, vigilant enforcement of those policies and meticulous attention to turnover and promotions.  Employees, no matter how honest and loyal, can be tempted to access company data beyond their specific needs or requirements.  When data is not restricted by technologically implemented authorization modules, there will always exist a danger of inappropriate, unlawful or nefarious access.
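To make the point concrete, below is a minimal, hypothetical sketch (in Python, with invented roles, names and branch identifiers; it does not depict Morgan Stanley's actual systems) of the kind of check an effective authorization module performs: an employee may pull a customer record only when his or her role and branch assignment establish a legitimate business need.

```python
# Hypothetical sketch of a data access authorization check: access to customer
# records is permitted only where role and branch assignment establish a
# legitimate business need. Names, roles and branch IDs are invented.

from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Employee:
    user_id: str
    role: str                   # e.g. "financial_advisor", "sales_assistant"
    branch_ids: FrozenSet[str]  # branches the employee actually services

# Roles permitted to view customer account records at all
ROLES_WITH_CUSTOMER_DATA_ACCESS = {"financial_advisor", "branch_manager"}

def may_view_account(employee: Employee, account_branch_id: str) -> bool:
    """True only if the employee's role and branch assignment justify access."""
    if employee.role not in ROLES_WITH_CUSTOMER_DATA_ACCESS:
        return False
    return account_branch_id in employee.branch_ids

# An advisor assigned to one branch cannot pull accounts from other branches,
# let alone firm-wide data.
advisor = Employee("jsmith", "financial_advisor", frozenset({"NY-014"}))
print(may_view_account(advisor, "NY-014"))  # True
print(may_view_account(advisor, "CA-221"))  # False
```

The design choice worth noting is that the restriction is enforced in code, not merely in a written policy, which is precisely the gap the SEC identified in the ineffective authorization modules.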

 

Control, Monitor and Limit Administrative Access to Systems. Morgan Stanley’s data security failure should also remind companies of the need not only to control, monitor and limit employee access to data — but also to control, monitor and limit administrative access to systems.

 

An administrator account is a user account that allows the administrator (or “admin”) to make changes that will affect other users.  Admins can change security settings, install software and hardware, and access all files on a computer, mobile device, tablet or network. Admins can also make changes to other user accounts. Cyber-attackers prey in particular on admin passwords (to attain command and control of a system), especially those rarely used, which can fly under the radar. Inadvertently keeping old admin passwords or assigning too many admin passwords can lead to massive data breaches and is an easily avoidable vulnerability.

 

Yet so many firms fail to have policies, procedures and technologies in place for admin activities – even something as simple as confirming and auditing the shutdown of stale admin accounts (e.g., those held by departed employees) or enforcing strong, regularly refreshed password requirements. The use of admin passwords and admin rights should be tightly controlled, monitored and documented.
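For illustration only, here is a short, hypothetical sketch of the kind of periodic audit such a policy might require: it flags admin accounts that belong to departed employees or that have sat unused beyond a stale-account threshold (all usernames, dates and the 90-day threshold are invented).

```python
# Hypothetical sketch of a stale admin account audit: flag admin accounts
# belonging to departed employees or unused for more than 90 days so they
# can be reviewed and disabled. All usernames and dates are invented.

from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)

admin_accounts = [
    # (username, last_login, still_employed) -- e.g., from a directory export
    ("ops_admin1",   datetime(2016, 5, 30), True),
    ("legacy_admin", datetime(2015, 11, 2), True),    # unused for months
    ("jdoe_admin",   datetime(2016, 1, 15), False),   # owner has departed
]

def flag_stale_admins(accounts, now):
    flagged = []
    for username, last_login, still_employed in accounts:
        if not still_employed or (now - last_login) > STALE_AFTER:
            flagged.append(username)
    return flagged

print(flag_stale_admins(admin_accounts, now=datetime(2016, 6, 8)))
# ['legacy_admin', 'jdoe_admin']  -> candidates for immediate disablement
```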

 

A Note on Passwords. The Morgan Stanley matter also serves as a reminder that companies should have written and technologically enforced policies mandating that passwords are changed at certain intervals throughout the year with specified configurations and characteristics.

 

Passwords are the first line of defense in any company and should be regularly audited for compliance with the password policy. Weak or predictable passwords make it very easy for an attacker to access external email portals or VPNs.
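As a rough illustration (the specific thresholds and names are assumptions, not any regulatory standard), a technologically enforced password policy boils down to checks like the following: minimum length, mixed character classes, and a maximum password age before a change is forced.

```python
# Hypothetical sketch of a technologically enforced password policy: minimum
# length, mixed character classes, and a maximum age before a change is
# forced. The thresholds are illustrative, not a regulatory requirement.

import re
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)

def password_meets_policy(password: str) -> bool:
    """Length of at least 12 with upper, lower, digit and symbol characters."""
    return (
        len(password) >= 12
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

def password_expired(last_changed: datetime, now: datetime) -> bool:
    return (now - last_changed) > MAX_PASSWORD_AGE

print(password_meets_policy("admin"))             # False -- the ASUS-style default
print(password_meets_policy("Tr!cky-Harbor-42"))  # True
print(password_expired(datetime(2016, 1, 1), datetime(2016, 6, 8)))  # True
```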

 

Weak passwords can even raise the ire of federal regulators. For instance, at an April 2016 cybersecurity conference, an assistant director at the FTC’s Bureau of Consumer Protection gave several examples of the kind of corporate behavior that justifies regulators’ attention, including a case where the FTC initiated civil charges against a computer hardware maker called ASUSTeK Computer (ASUS), whose default login for every router had “admin” as the username and “admin” as the password. The FTC believed that the security flaws, including the weak passwords, allowed hackers to gain access to ASUS routers in consumers’ homes and initiated a federal civil enforcement action, which ASUS ultimately settled via a consent order with the FTC.

 

The Importance of Remediation.   While the SEC's administrative order does not detail the specific remedial steps taken by Morgan Stanley, the order does note that the penalty reflects Morgan Stanley's remedial actions. Morgan Stanley undertook a careful, objective, transparent and independent internal investigation concerning Marsh, expeditiously reported its findings to the SEC and to law enforcement – and has now remediated the issue.  The penalty could have been far worse had Morgan Stanley not promptly remediated the data access module issue.

 

The Importance of Penetration Testing. This matter should serve as a reminder to all companies of the importance of hiring expert and thorough pen testers, who take a meticulous yet holistic approach to their analysis.

 

Just as a CEO has an annual physical checkup by a physician, a company should undergo a risk and security assessment of its inner cybersecurity workings. Implementing cybersecurity solutions requires a comprehensive risk assessment to determine defense capabilities and weaknesses and ensure the wise application of resources.  What works best is a disciplined yet flexible methodology that incorporates a company’s organizational culture, operational requirements and tolerance for risk, and then balances that against current technological threats and risk.  Since data breaches are inevitable, a proper risk and security assessment quantifies risk, develops meaningful risk metrics and conveys the effectiveness of risk mitigation options in clear and concise terms.

 

Morgan Stanley likely engages in a variety of routine and annual testing of its systems, yet its pen testers failed to discover Marsh’s unlimited access and Morgan Stanley’s failure to use appropriate authorization modules to limit access to the data.   This sort of testing oversight can happen; consulting firms and cybersecurity shops market a myriad of services (such as pen testing, risk and security assessments, data security audits, application security evaluations, code reviews, etc.) and the competency and methodology of pen testers varies wildly.  Firms should be sure to engage a strong pen testing firm.

 

What Makes a Good Pen Tester. A company’s pen tester should have substantial technological abilities, including expertise in testing web applications, mobile applications and devices, software products, third-party service providers, cloud solutions and IT infrastructure.

 

One mark of a good pen tester is being a thought leader in the information security community – authoring theoretical publications, giving peer conference presentations, contributing to open source projects, writing blogs or publishing vulnerabilities. It also helps if a pen tester has so-called blue team experience (that is, he or she has managed networks or systems or developed applications).

 

Good pen testers mimic the methods used by sophisticated attackers to identify vulnerabilities before they can be exploited. That is best achieved by using specialized, manual testing, not by running automated tools. Automated tools do have a place (it’s a good practice to run them internally looking for low-hanging fruit), but custom tools will typically prove far more effective. No two pen testing engagements are ever the same; even the same vulnerability can vary wildly in different environments, and having a proprietary set of tools evidences a pen tester’s ability to venture off-script and improvise when necessary. Proprietary tools also typically allow for a more detailed explanation of the so-called “kill chain” or path of an attack.

 

Pen testing has no standardization (not like some sort of emissions or DNA test), so company executives should give careful consideration to who should conduct a company’s pen testing and how to best interpret the results. Before conducting any test or assessment, company leaders should make sure IT departments document all cybersecurity policies and procedures, not just to get credit for good behavior and practices, but also because documentation is a beneficial compliance exercise.

 

Common types of pen testing for companies include: an external penetration test or vulnerability scan to assess Internet-facing computers, including firewalls, VPNs and other online gateways; an internal penetration test or vulnerability scan of a company's internal network, such as desktops, laptops, servers, printers, VoIP phones and other online devices; a web application assessment to analyze a company's website security; and social engineering testing to assess the "human firewall" of a company and gauge company staff cybersecurity awareness.

 

In addition, companies should conduct unannounced spear-phishing tests. Spear-phishing tests help determine employee resistance to one of the most common methods of remote compromise. The tests also help gauge the risks associated with permissive egress filters, targeted malware, the establishment of remote command and control channels and the susceptibility to undetected bulk data exfiltration.

 

The Penny Stock Bar Against Marsh. One minor but interesting aspect of the SEC penalty against Marsh is that he received a penny stock bar. This means that Marsh was somehow involved with penny stocks, which are among the riskiest of all securities to trade and are often associated with fraud and chicanery.

 

Section 15(b)(6)(A)(iii) of the Exchange Act authorizes the SEC to prohibit persons from participating in an offering of penny stock, a prohibition also known as the "Penny Stock Bar." Section 15(b)(6)(C) defines "person participating in an offering of penny stock" to include, among other things, any person "who engages in activities with a[n] issuer for purposes of the issuance or trading in any penny stock, or inducing or attempting to induce the purchase or sale of any penny stock."

 

While notoriously vague, the impact of a penny stock bar is that the individual is barred from acting as a promoter, finder, consultant or agent or otherwise engaging in activities with a broker, dealer, or issuer for the purpose of the issuance or trading in any penny stock, or inducing or attempting to induce the purchase or sale of any penny stock. Under the bar, Marsh cannot even own a penny stock.

 

The Need for a Virtual Big Brother. The SEC Order in the Morgan Stanley matter goes so far as to state specifically: "Morgan Stanley did not monitor user activity in the [data portals Marsh improperly and unlawfully infiltrated] to identify any unusual or suspicious patterns."   In other words, the SEC expects SEC regulated entities to implement intelligent and technological surveillance of employee conduct, recognizing that technology now exists that can monitor the activity of employees and red flag suspicious behavior.

 

This sort of data analytics is becoming more and more popular (and more effective) at corporations, especially at financial firms.  One lesson from the SEC Morgan Stanley order is that companies should consider data analytics and related artificial intelligence applications, which might have alerted Morgan Stanley to the individual's behavior (which took place over a period of three years).
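To illustrate the idea in the simplest possible terms, the hypothetical sketch below scans a portal access log and flags any user whose volume of customer-record lookups is a statistical outlier relative to peers. Real monitoring tools are far more sophisticated; the field names, data and z-score threshold here are invented.

```python
# Hypothetical sketch of access-log analytics: flag any user whose volume of
# customer-record lookups is a statistical outlier versus peers. Field names,
# data and the z-score threshold are invented for illustration.

from collections import Counter
from statistics import mean, pstdev

# One (user_id, account_id) pair per customer-record lookup in the portal.
access_log = []
for i in range(20):  # twenty peer advisors with a couple of lookups each
    access_log += [(f"advisor_{i}", f"acct_{i}_1"), (f"advisor_{i}", f"acct_{i}_2")]
access_log += [("advisor_x", f"acct_x_{j}") for j in range(1000)]  # the outlier

def flag_outliers(log, z_threshold=3.0):
    counts = Counter(user for user, _ in log)
    values = list(counts.values())
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return []
    return [user for user, c in counts.items() if (c - mu) / sigma > z_threshold]

print(flag_outliers(access_log))  # ['advisor_x'] -> escalate for review
```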

 

For instance, a financial firm employee who trades or is otherwise active in the penny stock market should trigger enhanced supervision. The penny stock market is historically replete with fraud and populated by con artists, and requires enhanced internal controls; increased supervisory intervention; and healthy, continuous and rigorously skeptical oversight.

 

Additionally, while by no means a panacea, by implementing new and emerging data analytic technologies, a company, especially a financial firm, demonstrates to regulators and shareholders that it takes seriously its cybersecurity-related internal controls.

 

The $1 Million Penalty. Penalty amounts continue to be essentially random at the SEC, and there exists no reasonable calculus that can break down the $1 million penalty against Morgan Stanley.  Like so many other SEC penalties, the $1 million amount probably just "felt right" to the SEC enforcement staff and to the commission, and is as much about current SEC mores and precedent as about SEC statutes, rules or regulations.

 

The Appropriate Paradigm for Cybersecurity. Every company can experience a data breach — and probably already has. That is why companies need to shift cybersecurity practices away from prevention and detection and into a paradigm of incident response. Traditional data breach protections do not detect quickly enough, or act nimbly enough, to counter today’s sophisticated and clandestine data breaches.

 

Yet, so many companies remain unwilling to recalibrate cybersecurity into a more effective archetype of response. Because cybersecurity threats have suddenly become so complex, sophisticated, and transnational, companies are struggling to stay current. When a data breach hits the headlines, there is an instinctive reaction that somebody screwed up and left a door unlocked. This only further fuels the fire that breached companies must redouble fortification and detection. That might be true, but the reality is that companies, above all else, should pivot their attention and focus to data breach response.

 

When companies trying to prevent data breaches rely too much upon customary protections of intrusion detection and firewalls, they are just as misguided as parents trying to prevent their kids from catching colds by relying upon hand-washing and multiple clothing layers. The smarter method for combating data breaches (like colds) is to focus efforts and preparation on how to contain, treat, and cure the problem, as fast and as painlessly as possible.

 

Company executives should preach this realism, rather than the fantasy of ironclad security.  Welcome to the new paradigm of cybersecurity: where technological infrastructure has expanded dramatically; where data-points reside on multiple platforms (including employee devices, vendor networks, and the cloud); and where data breaches don't define victim companies, but how companies respond to them does.

 

FINAL THOUGHTS

 

Morgan Stanley clearly made a mistake with respect to its internal systems, and its slip-up probably allowed a scheming employee to steal private client data – which in turn left that data vulnerable to external threats.

 

Whether that mistake should have cost the firm a $1 million penalty and the scarlet letter of an SEC enforcement action is debatable. But under any circumstance, the matter sends two important messages above all else:

 

First, no firm enjoys perfect cybersecurity, no matter how sophisticated and careful.  Mistakes will happen and when they do, the SEC will pounce, enforcing its broad and sweeping Safeguards Rule in its own home field of an SEC administrative courtroom.  Second, by responding with speed, transparency, independence and vigor, Morgan Stanley, despite being penalized, actually deserves to be commended.

 

*****

 

John Reed Stark is President of John Reed Stark Consulting LLC, a data breach response and digital compliance firm. Formerly, Mr. Stark served for almost 20 years in the Enforcement Division of the U.S. Securities and Exchange Commission, the last 11 of which as Chief of its Office of Internet Enforcement. He has also served for 15 years as an Adjunct Professor of Law at the Georgetown University Law Center, where he taught several courses on the juxtaposition of law, technology and crime. He also served for five years as managing director of a global data breach response firm, including three heading its Washington, D.C. office.  Mr. Stark is also the author of, “The Cybersecurity Due Diligence Handbook,” available as an eBook on Amazon, iBooks and other eBook distribution sites.