How Federal Agencies Can Adapt to the Unlimited Workforce

As agencies continue to embrace a hybrid remote working model, they are also tasked with adjusting their cybersecurity policies and guidelines. For nearly two years, we have seen a rapid deterioration in the norms that governed physical and digital workspaces. Without these standards, interpreting behavior and understanding risk have become increasingly complex. Organizations have struggled to maintain visibility of their assets, and internal and external attackers have capitalized on weaknesses revealed by ambiguity and uncertainty.

Although the transition to remote working has not been without benefits for some, such as reduced commuting time or the ability to recruit talented employees from a wide range of locations, many employees are feeling the negative impact of long-term stress. For example, the blurring of boundaries between personal and professional life contributes to the prevalence of employee burnout. Burnout, in turn, negatively impacts employee well-being and resilience. These social and emotional factors then impact behavior, including behavior that can put an organization at risk. Employees who don’t feel well rarely perform well, and frustrations with security frictions, such as slow VPNs or multiple account logins, can cause people to take shortcuts like using personal storage accounts or personal email. It is not possible to have a resilient organization without fostering a resilient workforce.

When it comes to security, agencies can no longer afford to prioritize technology over people. Security strategies must be recalibrated to meet the needs of employees while meeting the challenges of securing environments that no longer have traditional boundaries. Here’s how to start:

Challenge your assumptions about behavior

On the one hand, rules and boundaries are eroding with the transition to hybrid remote work. On the other hand, many established rules are based on unquestioned assumptions about what employees actually do. Together, these two realities create security problems. Systems theories and models, such as humanistic systems theory, help us understand the critical gaps between how we imagine employees use technology and how they actually use it, and between the technology we tell people to use and the technology they say they use. Too often, agencies create rules based on an imagined reality rather than the real one, and the result is rules that are ineffective and frustrating. Employees constantly turn to creative workarounds to overcome the challenges of hybrid remote work, from using their personal cloud and email apps to taking pictures of their screens and texting the images to someone without access.

Generally, employee creativity is a good thing. But when it comes to security, it can be a big problem. In a recent survey of 3,000 workers, 47% of respondents said they use shadow IT. This kind of exposure and security risk is invisible to agencies that aren’t invested in understanding how people interact with technology. Going forward, agencies must accept that their existing assumptions and models about security may be insufficient or even inaccurate. To correctly interpret this new world with fewer boundaries, agencies must include the human element in their analysis.

Get insights from analytics

Analytics offers a compelling way to bridge the gap between the imagined view of how employees use technology to access and interact with critical agency assets and the messy reality that they are likely breaking the rules along the way.

To start, you need data. The type and amount of data an agency collects determine the kinds of insight that can be gained from those data points. Advanced analytics, including strategies like rule-based analytics built on subject matter expertise, or even machine learning, can help agencies identify patterns in workforce behavior. Establishing a data-driven understanding of what is normal ultimately allows organizations to more quickly identify and respond to risks or abnormal activity. Exploratory data mining can also help organizations gauge the severity of risks that may previously have been accepted. For example, if an organization did not have a policy against using USB flash drives to store or transfer data, but noticed through analytics how much data was being moved via USB, it could choose to change that policy. Analytics can also help agencies prioritize which risks to address first, which is particularly useful when resources or expertise are scarce.
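
To make the rule-based approach concrete, here is a minimal sketch. The field names, the telemetry format, and the 100 MB daily threshold are assumptions for illustration; a real agency would feed this from its own endpoint logging and tune the rule with its subject matter experts.

```python
from collections import defaultdict

# Illustrative threshold chosen by subject matter experts: 100 MB per user per day.
THRESHOLD_BYTES = 100 * 1024 * 1024

# Hypothetical endpoint telemetry; field names are assumptions, not a real schema.
events = [
    {"user": "u1", "date": "2021-11-01", "bytes_to_usb": 2_000_000},
    {"user": "u2", "date": "2021-11-01", "bytes_to_usb": 850_000_000},
    {"user": "u1", "date": "2021-11-02", "bytes_to_usb": 3_000_000},
]

# Aggregate how much data each user moved to removable media each day.
daily_totals = defaultdict(int)
for event in events:
    daily_totals[(event["user"], event["date"])] += event["bytes_to_usb"]

# Flag any user-day total that exceeds the expert-defined threshold for review.
for (user, date), total in sorted(daily_totals.items()):
    if total > THRESHOLD_BYTES:
        print(f"Review: {user} moved {total:,} bytes to removable media on {date}")
```

The same aggregation also answers the policy question raised above: even before any rule fires, summing the daily totals shows how much data is actually leaving via USB.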

As an agency’s analytical capabilities mature over time, it becomes increasingly possible to enable dynamic threat response and risk-adaptive policies that can significantly reduce response time and risk exposure.

Privacy first

Using data and analytics to understand employee behavior doesn’t have to come at the expense of privacy. Instead, agencies should strive to anonymize data and limit the visibility of raw data or identifying information as much as possible. Analyzing group behavior, rather than individual behavior, can also be a powerful way to understand organizational health and identify risk.
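
As one illustration of that principle, the sketch below pseudonymizes user identifiers with a keyed hash before events enter an analytics pipeline. The key, the field names, and the truncation length are assumptions for the example; a real deployment would manage the key in a separately controlled secrets store.

```python
import hashlib
import hmac

# Illustrative key only; in practice it would live outside the analytics team's reach.
PSEUDONYM_KEY = b"example-key-managed-outside-the-analytics-team"

def pseudonymize(user_id: str) -> str:
    """Return a stable pseudonym so analysts never see the raw identity."""
    digest = hmac.new(PSEUDONYM_KEY, user_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]

# Hypothetical raw event containing identifying information.
event = {"user": "jane.doe@agency.gov", "action": "usb_write", "bytes": 850_000_000}

# Replace the identity before the event reaches analysts or dashboards.
masked_event = {**event, "user": pseudonymize(event["user"])}
print(masked_event)
```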

Behavioral analysis often generates risk scores tied to an individual person. Masking identities in the user interface protects privacy, but it is also a valuable bias-reduction strategy for the security team members who investigate risky users. Recent research indicates that bias undermines the effectiveness of insider threat missions and of security programs more broadly. For example, investigators may be hesitant to investigate a high-ranking official or a friend, and may be more or less likely to investigate someone based on demographic factors such as a person’s gender or even name.

It is possible, and necessary, to advocate both for privacy and for a data-driven understanding of how the workforce uses technology. In fact, if agencies fail to monitor user behavior and something goes wrong, they will be criticized for missing the threat. By anonymizing data, using role-based controls to limit access to identifiable information, and auditing internal investigative behavior, agencies can identify and respond to cybersecurity threats without creating additional stress for employees.
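
A minimal sketch of the last two controls follows, under assumed role names and a toy in-memory pseudonym map: re-identification is gated behind a role check, and every unmasking attempt, allowed or not, is written to an audit log that can itself be reviewed.

```python
import logging
from datetime import datetime, timezone
from typing import Optional

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("unmask-audit")

# Hypothetical mapping held by a separately controlled service, not by analysts.
PSEUDONYM_TO_IDENTITY = {"a1b2c3d4e5f6": "jane.doe@agency.gov"}

# Assumed role name; a real agency would map this to its identity provider.
AUTHORIZED_ROLES = {"insider-threat-adjudicator"}

def unmask(pseudonym: str, requester: str, role: str) -> Optional[str]:
    """Resolve a pseudonym only for authorized roles, auditing every attempt."""
    allowed = role in AUTHORIZED_ROLES
    audit_log.info(
        "unmask attempt %s requester=%s role=%s pseudonym=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), requester, role, pseudonym, allowed,
    )
    return PSEUDONYM_TO_IDENTITY.get(pseudonym) if allowed else None

print(unmask("a1b2c3d4e5f6", requester="analyst7", role="security-analyst"))
print(unmask("a1b2c3d4e5f6", requester="lead3", role="insider-threat-adjudicator"))
```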

The bottom line is that controlling employee behavior is not the goal in this new hybrid remote work environment. The goal is to proactively understand and respond to dynamic changes in the workplace.

Dr. Margaret Cunningham is a Principal Human Behavior Researcher in Forcepoint’s Global Government and Critical Infrastructure (G2CI) group, which focuses on establishing a human-centric model for improving cybersecurity. Previously, Cunningham supported technology acquisition, research and development, operational testing, evaluation and integration for the US Department of Homeland Security and the US Coast Guard.
