Artificial intelligence (AI) has made its way into car wash organizations nationwide and is rapidly changing how these organizations operate and make decisions. While this technology presents opportunities for the car wash industry, including enhanced workflows, predictive maintenance, streamlined operations, reduced costs, and improved customer experiences, it has limitations and exposures that car wash employers need to consider.

            Implementing workplace policies can help employers understand the potential legal, business, and reputational risks associated with using AI tools and protect against them.

Artificial Intelligence Company Policies

            Employers are using AI systems to sort through resumes, create job postings, streamline the hiring and onboarding processes, and automate HR functions. While this technology can help improve car wash organizations’ operational efficiencies, it presents certain risks. For example, AI algorithms can reinforce biased or discriminatory hiring practices, even unintentionally. Additionally, using AI tools to monitor employee activities more closely can raise privacy concerns. As the integration of AI systems becomes more widespread, anticipating the issues this technology may pose in the workplace is increasingly essential.

            Despite the potential risks of using AI tools, laws and regulations have not kept pace with employers’ adoption of this technology. While some existing laws address AI-related issues, this technology remains a relatively new legal area. There is currently a patchwork of federal and state regulations that address aspects of using AI tools in the employment context; however, legal issues related to these tools will likely continue to emerge as AI technology develops and becomes more advanced.

            Because AI technology remains largely unregulated, there are gray areas employers must navigate. Employers can establish governance policies and procedures to evaluate and monitor AI tools and assess their long-term impacts. Understanding how AI tools are used in the workplace can guide employers as they develop related policies. Existing workplace policies may already address some AI-related risks, but employers may need to reevaluate these policies to address specific concerns. This can help ensure that car wash organizations use AI tools responsibly and integrate such technology to complement human activity in the workplace.

            For car washes operating in multiple states, the use of AI tools can present compliance problems due to varying federal and state laws regulating this technology. Certain uses of AI tools may be illegal in some jurisdictions or subject to different regulations. As such, car wash organizations must devise policies to navigate these issues if employees work in different states.

Data Privacy and Surveillance

            AI technology can collect and analyze data to help increase workforce and organizational productivity. This can help employers refine their approaches based on AI-derived insights and track employee performance.

            However, employers must consider employees’ privacy rights when doing so and institute effective policies to outline and protect those rights. Some jurisdictions have imposed consent and notice requirements for using AI tools in the workplace. New York, Delaware, and Connecticut require employers to notify employees of electronic monitoring. Other states have implemented consent and notice requirements for using AI technology as an interview tool. For example, in Maryland, an employer cannot use facial recognition software during an interview unless the interviewee signs a waiver.

            Establishing policies to address these issues can help ensure that increased monitoring of employees through AI tools does not become intrusive or reveal private or confidential information. This can include disclosing to applicants and employees how such technology is used.

Copyright and Intellectual Property Rights

            AI-generated content can violate copyright laws or infringe on third-party intellectual property rights. In addition, conversations employees have with AI chatbots may be reviewed by AI trainers, inadvertently disclosing sensitive and confidential business information and trade secrets to third parties and potentially exposing employers to legal risks under privacy laws.

            Additionally, before using any content generated with AI tools, employers should consider its ownership status, how it is protected, and who holds the rights to use it. Employers can review and update their confidentiality and trade secret policies to ensure they cover third-party AI tools.

            Car wash organizations can also train employees on potential copyright and intellectual property issues, ensuring inputs used to create AI-generated content do not include protected or confidential data. Employers can also restrict access to AI tools to reduce their legal risks.

Anti-Discrimination Concerns

            Using AI technology can lead to intentional and unintentional discrimination in the workplace, resulting in costly lawsuits or investigations. For example, AI algorithms used to make employment decisions may be trained on historical data sets that are biased or discriminatory, screening resumes or setting job requirements based on protected characteristics such as age, race, gender, or national origin. As a result, employers should be cautious when developing, applying, or modifying the data used to train and operate AI tools that make employment decisions.

            The U.S. Equal Employment Opportunity Commission (EEOC) identified AI technology as a priority subject matter in its 2023-2027 Strategic Enforcement Plan, signaling a potential increase in AI-related enforcement actions. The agency recently issued guidance regarding employers’ use of algorithms and AI tools when making hiring or other employment decisions to ensure their decisions do not violate employees’ federal civil rights.

            In 2023, the EEOC launched the Artificial Intelligence and Algorithmic Fairness Initiative to help ensure that workplace use of AI tools complies with federal civil rights laws. Even if employers already have anti-discrimination policies in place, they can consider instituting bias audits to impartially evaluate the disparate impact of their AI tools on protected classes. They can also review their AI-based compensation management tools to ensure they do not violate pay equity laws. Organizations should consider requiring the same of any vendors they use.

            As AI tools become more advanced, employers’ ability to control this technology will likely become more limited. That is why car wash organizations must establish policies to ensure the ethical use of AI tools. While there are still unknowns when it comes to AI, these policies should account for what is known and be reevaluated regularly as the technology evolves.

AI Cyber Threats for Car Wash Operations

            While AI technology offers beneficial applications, cybercriminals can also weaponize it. In an experiment conducted by cybersecurity firm Home Security Heroes, an AI tool was able to crack 51% of common passwords in less than one minute, 65% in under one hour, 71% in under one day, and 81% in under one month.

            As this relatively new threat continues to grow, it is imperative for car wash organizations to understand its risks and adopt strategies to mitigate these concerns. Cybercriminals can weaponize AI to identify targets and launch attacks. Examples include using AI to:

            • Create and distribute malware through chatbots and fake videos.

            • Crack credentials and steal passwords.

            • Deploy convincing social engineering scams that trick targets into sharing confidential information or downloading malware.

            • Identify exploitable software vulnerabilities such as unpatched code or outdated security programs.

            • Efficiently disseminate stolen data.

Protect Against Weaponized AI Technology

            To protect against these vulnerabilities, businesses should be aware of the risks associated with the weaponization of AI technology and implement effective strategies to mitigate these exposures. Strategies to consider include the following:

            • Promote the safe handling of critical data and connected devices by requiring strong passwords and multifactor authentication, regularly backing up data, installing security software on networks and devices, and regularly training employees on cyber hygiene.

            • Use automated threat detection software to monitor business networks for weaknesses or suspicious activity.

            • Create a comprehensive cyber incident response plan and routinely practice it to stop cyberattacks or reduce their potential damage.

            • Secure adequate insurance coverage to provide financial protection against the weaponization of AI.

            These tactics can reduce the risk of experiencing a cyberattack and mitigate related losses. AI technology is likely to contribute to rising cyberattack frequency and severity. By staying informed on the latest AI-related developments and taking steps to protect against its weaponization, car wash businesses can maintain secure operations and minimize associated cyber threats.

Cyber Insurance

            As cyberattacks become more frequent and costly, it is crucial for car wash organizations to maximize their financial protection against related losses by purchasing sufficient insurance. Cyber coverage, also known as cyber liability insurance, can help car wash organizations pay for a range of expenses that may result from cyber incidents — including (but not limited to) data breaches, ransomware attacks, and phishing scams.

            Specific cyber insurance offerings differ between carriers. Furthermore, car wash organizations’ coverage needs may vary based on their exposures. In any case, cyber insurance agreements typically fall into two categories: first-party coverage and third-party coverage. Policyholders should have a clear understanding of both categories of coverage to comprehend the key protections offered by their cyber insurance.

First-Party Coverage

            First-party cyber insurance can offer financial protection for losses an organization directly sustains from a cyber incident. Covered losses generally include the following:

            • Incident Response Costs — can help pay the costs associated with responding to a cyber incident. These costs may include utilizing IT forensics to investigate the breach, restoring damaged systems, notifying affected customers, and setting up call center services.

            • Legal Costs — can help pay for legal counsel to assist with any notification or regulatory obligations resulting from a cyber incident.

            • Data Recovery Costs — can help recover expenses related to reconstituting data that may have been deleted or corrupted during a cyber incident.

            • Business Interruption Losses — can help reimburse lost profits or additional costs incurred due to the unavailability of IT systems or critical data amid a cyber incident.

            • Cyber Extortion Losses — can help pay costs associated with hiring extortion response specialists to evaluate recovery options and negotiate ransom payment demands (if applicable) during a cyber incident.

            • Reputational Damage — can help pay for crisis management and public relations services related to a cyber incident.

Third-Party Coverage

            Third-party cyber insurance can provide financial protection for claims made, fines incurred, or legal action taken against an organization due to a cyber incident. Types of third-party coverage usually include the following:

            • Data Privacy Liability — can help recover the costs of dealing with third parties who had their information compromised during a cyber incident. These costs may include handling third-party lawsuits or legal disputes, offering credit-watch services, and providing additional compensation.

            • Regulatory Defense — can help pay fines, penalties, and other defense costs related to regulatory action or privacy law violations stemming from a cyber incident.

            • Media Liability — can help reimburse defense costs and civil damages resulting from defamation, libel, slander, and negligence allegations associated with the publication of content in electronic or print media. This coverage can also offer protection amid copyright, trademark, or intellectual property infringement incidents.

Conclusion

            AI technology is revolutionizing the employment landscape. As more car wash organizations embrace this technology, establishing proper workplace policies can help protect against related risks and prevent potential violations. Being proactive in creating AI-related policies and procedures can help identify exposures and outline strategies to address them.

            Cyber insurance can make all the difference in helping car wash organizations avoid large-scale financial losses amid cyber incidents. Consult a trusted and experienced insurance professional to discuss your car wash-specific cyber insurance needs.

Kimberly Grizzle, AAI, is the director of marketing and business development for The Insurancenter, a full-service independent insurance agency founded in 1895. The agency launched a national car wash insurance program in 1986. For questions regarding this article or additional insurance information, Kimberly can be reached at info@theinsurancenter.com or visit www.carwashinsurance.com.


Sources:

Zywave Inc., “Human Resources: Workplace Policies on Artificial Intelligence,” (2023).

Zywave Inc., “Risk Management, Cyber Liability: How Cybercriminals are Weaponizing Artificial Intelligence,” (2023).

Zywave Inc., “Commercial Insurance Coverage, Risk Management: The Value of Cyber Insurance,” (2023).