Running head: CYBERCRIME IN THE US: A LEGAL PERSPECTIVE
Cybercrime in the US: A Legal Perspective
Authors Name:
Institution Affiliation: University of the Cumberlands
Date: 06/05/2020
Table of Contents
Abstract
Introduction
Cybersecurity Accounts
Evolution of Cyber Threats and Cyber Laws
Concerns and Encounters of Cybercrime in the United States
Topmost Crimes in the US
    Malware
    Debit Card Fraud
    Compromised Passwords
    Social Phishing
Impacts of Cybercrime on the United States
    Cybercrime and Society
    Cybercrime and Finance
    Impacts of Cybercrimes on Laws and Regulations in the US
A Legal Framework
    The Gramm-Leach-Bliley Act (GLBA)
    The Health Insurance Portability and Accountability Act (HIPAA)
    Other Significant Acts (“Federal Trade Commission, Electronic Communications Privacy Act”)
Combating Cybercrimes
    Assimilating Robust Security Measures
    Updating the Operating Systems
    Training and Awareness
    Encryption and Backing Up Data
The Future of Cybercrimes and Change in the Cyber Law Landscape
Conclusion
References
Abstract
The expansion of the internet has opened up more gaps in cybersecurity. To move beyond merely conforming to rudimentary measures, the legal outlook must bridge the gap between technology and the need for cybersecurity. Though the internet has created opportunities for developing economic and social networks, research suggests that at least 75% of the United States population has been exposed to cyber threats, either knowingly or involuntarily.
In the pursuit of legal value, the following research looks at how cybercrime has affected the United States, commencing with an account of cybercrimes, the types of cyber threats, and the effects of cyber threats on society and the government as a whole. The research then expands on how technology has affected internet security, the issues and challenges of cybersecurity in the United States, and the top crimes that have affected businesses and citizens in the country. Furthermore, the research looks at the legal perspectives of the United States in curbing cyber threats. Under the legal perspective, the research focuses on existing acts in different sectors that protect the CIA triad, the role of these laws in curbing cybercrimes, and how cybercrimes have affected the legal landscape in the US. To conclude, the research looks at the future of cybercrimes and how dynamic shifts in technology and cybercrime will affect the authority and enforcement of cyber laws.
Introduction
According to a survey conducted in the US, at least 32.7% of respondents indicated that they had experienced hacking activities, either through their social media accounts or through an internal activity such as email phishing. A Statista report indicates that as of 2018, at least 37.2% of US internet users were susceptible to internet banking fraud, 57.6% were vulnerable to malicious software, 34.1% to identity theft, and 69.7% to phishing activities such as fake phone calls, SMS, and fake emails (Clement, 2019).
A report by Waghole (2019, p. 518) indicated that by 2021, cybercrimes are projected to increase the cost of operations by up to $6 trillion. The author indicates that as of 2017, over 780,000 records were exposed to cyber-attacks, and today the United States accounts for 28% of the data breaches experienced globally. A report produced by the National Computer Security Survey (NCSS), which examined the impact of cyber threats on businesses in the United States (Rantala, 2008), indicated that at least 67% of businesses experienced at least one cybercrime in their systems, 11% of companies experienced cyber theft, and 24% of businesses experienced external threats related to cyberattacks. The report becomes even more intriguing as it outlines that the businesses did not consider the legal dimension of the threats, as they did not report the incidents to law enforcement. In 2015, approximately 68% of victims accounted for a loss of above $10,000 due to cyberattacks, and most businesses experienced system downtime that reduced their profit margins.
Cybersecurity accounts
The account of cyber threats dates back to the 1970s, when the early phone system became vulnerable. Back then, hackers were referred to as “phreakers,” and they discovered that telephones used unique codes and tones that, if tweaked, could be exploited. After incessant experimentation with hardware across different telephone companies, for example the Bell Telephone Company, hackers were able to conduct a social engineering scheme that tricked Bell employees into handing over telephone codes that were ultimately utilized to access the internal system.
At that time, there existed a conflict between law and internet security: the United States did not have defined legislation regarding actions that could be taken in case of such activities. According to Shinder and Cross (2008), it was not yet appreciated that information systems were complex yet delicate structures that could be vulnerable to attack, a realization that resulted in the enactment of the first federal computer crime law in 1986. As technology became more intricate, more sophisticated cybercrimes emerged, for example, the Morris worm, which affected more than 6,000 computers and accounted for damage of over $98 million.
In 1990 the United States saw the formation of the Electronic Frontier Foundation (EFF), which would respond to cyber threats and to overzealous enforcement mistakes. Over the past decades, many fraudulent activities have emerged, some costing individuals and the government fortunes. For example, in 2007, the hacker Max Butler employed network phishing techniques to defraud victims of over $86 million. Likewise, in 1999, NASA's systems were hacked by a 15-year-old, Jonathan James, who installed a backdoor on a US Department of Defense server; the hack allowed him to access millions of government emails and internal systems, including systems holding personally identifiable information (PII) for various military agencies. According to a Forbes report by Catherine (2006), NASA was forced to shut down its systems for three weeks to repair the damage.
Recently, due to the intensification of technology, many organizations have become vulnerable to attack, with some recording huge losses. For example, in 2014 eBay, an e-commerce company, lost user names, addresses, and passwords in a breach affecting over 145 million users. Also, the CryptoWall ransomware accounted for an estimated $325 million in damages. In 2017, Equifax, a US-based credit firm, exposed over 143 million users' accounts, and the company incurred huge losses due to legal and regulatory penalties.
Evolution of cyber threats and Cyber laws
In the present day, cyber-attacks have evolved from landline hacking to cryptojacking. According to Madsen (2019), the nature of cyber threats has recently advanced in order to survive, even as cyber experts adopt superior techniques to curb the exponentially growing threats and close security loopholes. Research indicates that a hacking attempt occurs every 39 seconds, meaning that the cybercrime world is continuously evolving; if stiff measures are not adopted, the alarming growth rate of cyber-attacks will overwhelm cybersecurity measures and practices.
A new era of cybercrime has emerged since the development of the Morris worm in 1988, and the new century has delivered even more convoluted malware such as worms and Trojans. An even more complex cyber threat of the 21st century has cut into many organizations: cryptojacking. With this technique, a hacker injects malicious JavaScript code into a user's browser and “harvests” the processing power of devices that access the page, mining cryptocurrency without the user's consent. Thought-provoking, right?
The evolution of cyber threats now raises a question for the United States government about the role of the constitution in protecting citizens against cyberattacks. In the United States, data security measures are promulgated by the federal, state, and local governments depending on the degree of severity and intricacy of a cyber threat. As a supplement to constitutional law, the data security landscape is backed by regulatory codes and contractual obligations documented between businesses and individuals, between businesses, or between government and businesses.
As an obligation to mitigate cyber threats, businesses are driven not only by applicable laws but also by civil litigation threats, security infringements, and regulatory pressure. Likewise, the United States government has extended information access rights, from prohibiting the government itself from accessing users' data to restricting private companies from utilizing the information, including data that does not directly affect users. According to Kurth (2018), there has been a far-reaching progression in data security acts: today the US government has enacted breach notification requirements whereby organizations must notify clients and the federal government of any breach activity if it affects more than 25 people. Such laws have influenced the re-engineering of other laws, especially in Europe, such as the General Data Protection Regulation (GDPR).
The data and regulatory landscape in the United States is constantly evolving with the dynamic change in technology and the escalation of cyber threats, with subdivisions constantly requesting new legislation and new amendments to the prevailing sections of US law addressing cybersecurity. This proliferation of laws shows no sign of abating, as the shift cuts across the state and federal levels.
The federal laws delineating breach notifications are recurrently amended to address emerging cybersecurity threats, for example cryptojacking, and therefore cover a larger background of imposing aggressive laws and sanctions. Nonetheless, federal government agencies and regulators such as the “Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), Securities and Exchange Commission (SEC), and the Department of Health and Human Services (HHS)” often update and review the existing laws and policy statements regarding the usage and dissemination of public data. Since 2017, the US government has been keen when it comes to the use of the internet. For example, federal legislators have moved to reform the Electronic Communications Privacy Act's 180-day rule, under which stored emails older than 180 days received weaker protection from government access.
Concerns and Encounters of Cybercrime in the United States
Though the United States government is obligated to protect users from cybercrimes, current technology has created obstacles when it comes to investigating cybercrimes. According to Katharina (2019), even though cybercrime is addressed in tandem with cyber laws, there exist challenges such as anonymity and attribution. Anonymity enables users to conduct online activities without revealing their actual locations, for example when a user employs a proxy server to create a connection. Basically, the proxy server stands as an intermediary that relays a client's traffic to the destination server, and the structure of such a connection enables the user to hide their Internet Protocol (IP) address. Anonymization can be legitimate: a user could have a substantive reason for navigating the internet over a masked IP address, for example to exercise their right to communicate or express their thoughts without revealing their identity.
Conversely, cybercriminals can exploit anonymity to encrypt their traffic and hide their identity, for example by using tools such as Freenet or the Tor Browser to create an anonymous network. With anonymization, the government cannot differentiate individuals who are on the internet to conduct decent activities from culprits who intend to hack user activities. Also, it becomes difficult to trace the origin of a hacking activity, as the connections are masked across anonymous IP addresses.
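To make the proxy mechanism described above concrete, the sketch below shows how a client can route web requests through an intermediary using only Python's standard library, so the destination server records the proxy's address rather than the client's. The proxy address is a hypothetical placeholder (a reserved TEST-NET IP), and the final request is left commented out because no real proxy is assumed.

```python
import urllib.request

# Hypothetical proxy address (a reserved TEST-NET IP, illustration only).
# In practice this would be an anonymizing proxy; Tor instead exposes a
# SOCKS endpoint, which requires an extra handler not shown here.
PROXY_URL = "http://203.0.113.10:8080"

# Route HTTP and HTTPS requests through the proxy so the destination
# server sees the proxy's IP address instead of the client's.
proxy_handler = urllib.request.ProxyHandler({
    "http": PROXY_URL,
    "https": PROXY_URL,
})
opener = urllib.request.build_opener(proxy_handler)

# Any request made with this opener would now appear to originate
# from the proxy (commented out since the proxy is fictional):
# response = opener.open("http://example.com")
```

This is also why back-tracing is hard: the server's logs record only the last hop in the chain.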
When it comes to attribution, a user or an entity is supposed to determine who or what is responsible for a specific cyber threat. The goal of the process is to connect particular cyber threats to a specific device or entity, for example to identify state-sponsored cyberattacks. Attribution is further complicated by the use of anonymity and “malware-infected zombies” such as botnets, malware-controlled devices that can create connections over a remote network.
Recently, back-tracing has become a challenge: the process by which a cybercrime is traced back to the original sender, either by unmasking IP addresses or using log entries. Back-tracing can be difficult and time-consuming, as it depends on the level of expertise and the complexity of the connection. Also, privacy laws restrict the government from accessing users' identities. For example, when trying to identify a cyber threat, a regulatory agency may be obligated to obtain the IP address of the user through the Internet Service Provider. The ISP often cannot simply provide details of the IP addresses without proper legal consent, sometimes even requiring investigators to obtain a subpoena or a warrant.
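As a minimal illustration of log-based back-tracing, the sketch below scans toy web-server log lines for the source IPs behind failed login attempts. The log format and all addresses are invented for the example, and in practice a recovered address may belong to a proxy or VPN exit node rather than the attacker's own machine.

```python
import re
from collections import Counter

# Toy log lines in a simplified web-server format (invented data).
LOG_LINES = [
    '203.0.113.7 - - [05/Jun/2020:10:01:12] "POST /login HTTP/1.1" 401',
    '203.0.113.7 - - [05/Jun/2020:10:01:13] "POST /login HTTP/1.1" 401',
    '198.51.100.23 - - [05/Jun/2020:10:02:40] "GET /index HTTP/1.1" 200',
    '203.0.113.7 - - [05/Jun/2020:10:01:14] "POST /login HTTP/1.1" 401',
]

# An IPv4 address at the start of a log line.
IP_PATTERN = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})")

def failed_login_sources(lines):
    """Count source IPs for requests that ended in HTTP 401 (failed auth)."""
    hits = Counter()
    for line in lines:
        match = IP_PATTERN.match(line)
        if match and line.rstrip().endswith("401"):
            hits[match.group(1)] += 1
    return hits

counts = failed_login_sources(LOG_LINES)
# One address produced three failed logins in quick succession --
# a candidate for further, legally authorized, investigation.
```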
Lack of international standardization limits the government from investigating cybercrimes in international matters. For example, if the United States government accuses another country of a cyber-attack, the evidentiary prerequisites require that the evidence be admissible in court and that the international state accept responsibility.
Topmost cybercrimes in the US
It would be prejudicial to discuss cybercrimes without at least naming the topmost cybercrimes affecting the United States. Reports indicate that 8 out of 10 people in the US have reported a cyber threat case or know a close person who has been affected by cyber threats. It is fair to say that Americans are now more worried about cybercrime than about any other crime. In an annual cybercrime survey, the Norton agency produced a report identifying the topmost cyber threats, including malware, debit card fraud, password hijacking, and social media access (Norton Cyber Security, 2017).
I. Malware
According to the Norton survey, 57% of individuals indicated that their devices, whether smartphones or laptops, had been affected by malware. Malware is a general term for any type of software developed to disrupt the operation of an information system or cause damage. Examples of malware include viruses, ransomware, worms, Trojans, and spyware. A virus alters the file system of the user so that when the user runs a certain task the virus is simultaneously executed. Many organizations have been able to curb viruses, which now comprise only up to 10% of incidents.
Just like a biological worm, a computer worm replicates itself and spreads through the system without the user's consent. Trojans are unique in that they masquerade as authentic programs, for example pop-ups that request the user to run a program as a measure of cleaning the PC. It would be incomplete not to mention ransomware, malware that gains access to a user's files, blocks the user's access to those files, and then demands that the user pay a ransom to regain access.
II. Debit card fraud
Debit card or credit card fraud accounted for the second-highest type of cybercrime in the United States, much of it “card-not-present” fraud in which stolen card details are used for remote purchases. The fraud has ballooned in the United States, recording losses of up to $4.76 billion in 2016. As depicted by Hadar (2019) in the Washington Post, over 80% of credit card user information has been compromised, for example in the Equifax data breach, which exposed records tied to over 130 million consumers.
III. Compromised passwords
Passwords are delicate structures that users tend to assume are secure, but surprisingly, passwords are exposed by unexpected sources, including the most trusted banks. When a data breach occurs, hackers are able to access users' credentials, passwords included. According to a Forbes report by Winder (2019), over 4 billion credentials were exposed in 2019, affecting over 44 million Microsoft account holders. Users are advised to check their password credibility and to adopt strong passwords combining letters, numbers, and special characters.
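The advice above can be turned into a simple automated check. The sketch below flags common password weaknesses; the thresholds (12 characters, one of each character class) are illustrative assumptions of the example, not an official standard.

```python
import string

def password_issues(password: str) -> list:
    """Return a list of weaknesses; an empty list means the basic checks pass.

    Thresholds here are illustrative, not drawn from any standard.
    """
    issues = []
    if len(password) < 12:
        issues.append("shorter than 12 characters")
    if not any(c.islower() for c in password):
        issues.append("no lowercase letter")
    if not any(c.isupper() for c in password):
        issues.append("no uppercase letter")
    if not any(c.isdigit() for c in password):
        issues.append("no digit")
    if not any(c in string.punctuation for c in password):
        issues.append("no special character")
    return issues
```

A weak choice such as `password` fails several checks, while a long mixed-character phrase passes them all.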
IV. Social phishing
According to a Statista report by Clement (2020) covering 2019, over 79% of users had a social network profile. With the proliferation of technology, the percentage is projected to rise to around 85% by 2021, since social media networks are an open platform for users and a vulnerable one in which it is easy to access personal data. Statistics show that a single Facebook security breach exposed over 50 million user accounts. It is therefore recommended that social media fans avoid posting private information on social pages, such as birth dates and locations, to prevent possible tracing through PII.
Impacts of cybercrime on the United States
Cybercrime has created a broad impact, spanning society, the economy, social status, and finance. It is evident that information technology has created a variety of benefits in education, the health sector, finance, and communication, to mention a few, yet it is the same technology that has ushered in a wave of cyber threats. According to Ashford (2018), at large, the influx of cybercrime has affected sectors that were thought to have superior technology. The dominance of technology in public life, and the aftermath associated with its integration, has demonstrated a need to protect information systems and assets at all levels of operation, not only by the public and businesses but also by the United States government as a whole.
I. Cybercrime and society
Society at large is adopting technology to simplify day-to-day activities. For example, society has integrated technology to ease communication through social media features; technology simplifies education, and such platforms request users to submit their personal information. The payment process has been centralized through technology, whereby rather than visiting financial institutions users can access services from a remote location. According to Rahaman (2016), technology has enabled users to save time, save costs, ease mobility, and create a landscape for innovation. With the dynamic changes and proliferation of technology, it is almost impossible to exhaust its importance.
As society continues to integrate technology into personal lives, technology builds up troves of personal information, centralizing consumers' data from the intimate to the mundane. This increase in connectivity and digitalization has made society more prone to cyber threats. Technology harms society when consumers lose money or when it is used to steal intellectual property, through activities such as identity theft, loss of privacy, and the financial costs incurred in adopting security measures.
II. Cybercrime and finance
The influx of cybercrime in the finance sector can be attributed to several factors, such as easy access to tools and technology that infiltrate financial systems, innovative technologies and ideas invented by hackers, an increase in the level of intelligence of cybercriminals, and the expansion of cybercrime hubs, for example anonymous browsers such as the Tor Browser and hacking communities.
In the finance sector, cybercrime has directly or indirectly impacted economic growth, job development, investment, and financial trust. For example, consumers will shift to financial vendors who provide superior and secure services, as seen today where some consumers prefer Bitcoin, perceiving it as more secure and cost-effective.
Cybercrimes such as the theft of intellectual property, access to confidential financial information, manipulation of monetary information, and security mitigation costs have stood as some of the most devastating effects of cyber threats. Exposure of such information has caused financial institutions to incur huge costs associated with federal penalties and client compensation, to lose reputation, and has even forced some businesses to close unexpectedly. Consider the case of JPMorgan, a financial company that incurred losses of over $100 million.
III. Impact of cybercrimes on laws and regulations in the United States
As the landscape of cybercrime expands, the US government continues to strengthen existing laws and broaden the legislative process. According to Jay (2016), researchers have examined whether the augmentation of cybercrime has affected the operation of the United States government, and to what degree. Of course, as cyber threats increase, the United States government is compelled to add more laws and departments, such as forensics units, to analyze new cyber threats and develop relevant countermeasures. The government is also prompted to conduct periodic meetings to review and update laws affecting cybersecurity. For example, in 2015 the government developed the Cybersecurity Act, which encouraged organizations to share information about threats that could exploit an information system.
Due to cybercrimes, the government was compelled to change federal cybercrime law in 1984, affecting all US citizens. That year, the government passed the Computer Fraud and Abuse Act (CFAA). The CFAA prohibited any user or organization from obtaining information directly related to national security and prohibited users from accessing a computer to defraud others of value, to mention a few provisions.
From a larger perspective, cybercrimes have prompted the European Union, along with jurisdictions in the United States, Latin America, and Asia-Pacific, to adopt new NIS-style directives aimed at improving information security and encouraging members to integrate supplementary steps, such as establishing a computer security incident response team (CSIRT), to curb the internet menace (Mendoza, 2017).
A legal framework
Privacy and data security are encapsulated by laws at the federal and state levels that combine into a legal framework protecting personal information. This patchwork is promulgated at the industry and state level, since the US constitution lacks a comprehensive federal data protection law. The underlying questions are: who falls under the legislative framework, and what type of information is protected under it?
Since the United States lacks an all-inclusive federal data protection policy, the applicability of the legal frameworks differs at the state and federal levels. At the federal level, the framework maintains the scope and applicability of the laws by industrial sector. By the same token, at the state level, the protection laws apply to enterprises that manage specific information regarding residents of the state, meaning that a business is subject to a state's laws if it maintains information about a citizen residing in that state, even though the business is not established there.
There is no uniform definition of “personal information” across state and federal regimes. From a general viewpoint, personal information is data that alone can easily be tracked or traced to an exact person. Examples of such data include a person's name, i.e., first and last name, bank details, and Social Security numbers. From a legal perspective, for example under the state notification laws, personally identifiable information (PII) can be defined as a person's name combined with other information such as driver's license information, billing records, place of education, etc.
According to S-Pl, the agency responsible for setting laws in the finance sector, personal information can be any type of information provided by clients to obtain a financial service or product. The laws also cut across results obtained by the financial agency in the course of a financial transaction and information about a consumer obtained indirectly but connected to service or product information, for example when a company outsources to a third party and user information is accessed remotely.
I. The Gramm-Leach-Bliley Act (GLBA)
The GLBA, also known as the Financial Modernization Act of 1999, is a US federal law that requires the financial sector to disclose how it shares and protects consumers' information. According to Groot (2019), for an organization to be GLBA compliant it has to satisfy a series of conditions, such as notifying clients of how it shares personal information and informing consumers of their rights regarding the usage and storage of their data, for example by providing customers with an opportunity to opt out if they feel that their data is being misused. Finally, the company has to protect users at all costs, per the security plan developed by the enterprise and the relevant government laws.
From a legal perspective, the GLBA has benefited consumers by protecting their information, therefore reducing their vulnerability to cybercrime. Companies complying with the GLBA rules are at a lower risk of possible penalties and of the loss of reputation associated with the loss of consumers' data. Under the act, any financial institution found guilty of non-compliance faces a penalty of $100,000 for each violation; an individual found to violate the act faces a fine of $10,000 for each violation and can face imprisonment of up to five years.
To create trust and reliability and to curb cybercrime, financial institutions are required to:
· Protect all private information against any unsanctioned access.
· Notify consumers of any data sharing with third parties and allow them to opt out of the sharing process.
· Track user activities, for example, a user trying to access protected records.
· Assess the degree of clients' data sensitivity and compare it with the control measures in place as a way of evaluating the level of system competency.
· Implement a security program and test it.
· In the case of outsourcing, select service providers who are able to safeguard both corporate and clients' data.
II. The Health Insurance Portability and Accountability Act (HIPAA)
According to the Centers for Disease Control and Prevention (2019), the HIPAA Act of 1996 is a federal law that sets standards and policies for the health sector, protecting sensitive patient information from being shared with external sources without the patient's consent and mandating the health sector to protect patients' information. In the past decade, the health sector has experienced the largest data breaches in the United States, accounting for at least 42.7% of all data breaches in the country. The major goal of the Privacy Rule was to protect health information while data flows across the health sector, in order to warrant the provision of better medical care. For example, a research center such as the CDC can request access to particular medical information as part of developing a new drug, so the sector has to share health data in a manner that does not harm patients.
Individuals and entities covered by the Privacy Rule include health providers, regardless of the practice or the size of the entity, as long as they are directly involved in the electronic transmission of health data; health plans, including entities that either provide medical care services or pay for them, for example insurers covering dental claims or drug prescriptions (church-sponsored and employer-sponsored groups are also part of health plans); healthcare clearinghouses, meaning those who receive identifiable health information from health entities to convert or process it into manageable or understandable formats, such as data analysts; and finally, business associates, individuals or organizations that use and disclose identifiable health information to perform particular functions, for example research centers.
The Department of Health and Human Services (2013) documents that covered entities are permitted, within limits, to disclose user information without the consent and authorization of the individual in circumstances such as:
· Disclosure of personal details, provided that if the information is to be reused for other purposes the entity obtains authorization from the individual or strips out data that could directly identify the individual.
· Disclose information regarding payment methods, treatment activities, or health operations such as providing x-ray scans.
· Under the HIPAA Security Rule, the entities are required to satisfy the CIA triad (confidentiality, integrity, and availability) for all health data transmitted electronically.
· They are required to detect possible threats that could make personal health information vulnerable to attack.
· They are required to periodically certify compliance.
III. Other significant acts (“Federal Trade Commission, Electronic Communications Privacy Act”)
The Federal Trade Commission enforces the FTC Act, which prohibits unfair acts in the conduct of trade. Such unfair acts extend to electronic activities that expose parties' private data or exploit users' secure content. As part of its responsibilities, the Commission investigates companies that break trade laws and also educates consumers on their rights when it comes to consenting to the use of their information.
On the other hand, the Electronic Communications Privacy Act (ECPA) was initially developed to prevent the government from accessing information from private communications, for example by tapping a user's communication without the user's consent. The act covers the interception of communications, including private organizations wiretapping or eavesdropping through electronic devices, organizations and users possessing devices that wiretap user data, and the disclosure of private information obtained unlawfully through unsanctioned interception.
Combating Cybercrime
At the present day, devices have become more connected than ever before, and connectivity is projected to increase as more and better devices are developed every minute. Nonetheless, even as an innovative paradigm develops, the connections have amplified the risk of fraud attacks, and more so cybercrime. Though the Department of Homeland Security is working with the federal and state governments to combat cybercrime, users and companies can adopt various measures to protect themselves.
I. Assimilating robust security measures
Robust security can be achieved by using a strong password together with a multifactor authentication strategy. When developing passwords, it is advisable to combine letters, special characters, and numerals. Also, statistics have shown that most of the US population uses the same password across multiple platforms; for example, the social media password is identical to the email password. According to Johnson (2019), users should use dissimilar passwords to reduce the severity of an attack in case a data breach occurs.
Multifactor authentication enables users to verify their credentials in more than one way; for example, a user may enter a password and then a secret code sent to a personal device.
II. Updating the operating system
Users should update their operating systems when they receive an update notification, first verifying the legitimacy of the vendor. System updates keep the existing system current against recently introduced malware and make it more stable when mitigating malware. Since enterprise infrastructure may be large, enterprises can update their hardware and software periodically.
III. Training and awareness
The first step in mitigating cyber threats is awareness. It is the responsibility of the enterprise to create consciousness of possible malware attacks among its workers and train them on possible ways to mitigate cyber-attacks, for example, training workers on the types of email to avoid so that they are able to differentiate between spam emails and corporate emails, and likewise training employees on possible ways to create a strong password.
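Training of this kind can be reinforced with simple tooling. The sketch below is a purely illustrative heuristic, not a real filter: the domain example.com, the function name, and the indicator list are assumptions chosen to mirror the markers employees are commonly taught to recognize.

```python
import re

CORPORATE_DOMAIN = "example.com"  # assumption: placeholder corporate domain

def phishing_indicators(sender: str, body: str) -> list[str]:
    """Flag common phishing markers in an email's sender address and body."""
    flags = []
    if not sender.lower().endswith("@" + CORPORATE_DOMAIN):
        flags.append("sender outside corporate domain")
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        flags.append("link uses a raw IP address")
    if re.search(r"urgent|verify your account|password expired", body, re.I):
        flags.append("urgency or credential language")
    return flags
```

An email from an outside address containing an urgent request and a raw-IP link would raise all three flags, while routine internal mail raises none.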
IV. Encryption and backing up data
According to Popat (2018), a cybercrime prevention policy should comprise two elements: protection criteria and a recovery mechanism. Organizations can achieve the former by securing their activities, for example, encrypting corporate data, internet connections (whether internal or remote), and communications with stakeholders. After encryption, the organization should back up the data; even after a data breach, an organization can easily recover from the incident if it has a backup drive.
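The backup half of this policy can be illustrated with a short Python sketch that copies a file to a backup directory and verifies the copy against a SHA-256 digest. The function names are assumptions for illustration; encryption itself would be layered on top with a dedicated cryptographic library.

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_with_verification(src: Path, dest_dir: Path) -> Path:
    """Copy src into dest_dir and confirm the copy is byte-identical."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)
    if sha256_of(src) != sha256_of(dest):
        raise IOError("backup verification failed: digests differ")
    return dest
```

Verifying the digest at backup time ensures the recovery mechanism will actually restore intact data after an incident.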
Other protection criteria include securing corporate hardware against logical and physical access, investing in cyber insurance, seeking advice from external experts such as security auditors, developing a workplace that assimilates a security-focused culture, and avoiding public networks, for example, free wireless fidelity (Wi-Fi) offered in parks or restaurants.
The future of cybercrimes and change in cyber law landscape
As more and more devices become connected to the internet and to each other, it is becoming difficult to monitor all the devices at runtime. Furthermore, most businesses in the United States, and the world in general, are shifting to new technologies such as cloud computing, cryptography, machine learning, artificial intelligence, and big data, just to mention a few. Businesses are now left to answer the question: what’s next? What’s next for cybercrime, and what measures are organizations assimilating to combat cybercrime activities in the future? A report by Poremba (2019), citing a survey conducted by Juniper Research, projected that by 2024 organizations will have incurred costs of $4 trillion from data breaches.
Again, it is imperative to look at the standing position of the United States government in combating future cybercrime, that is, what measures the government is undertaking to combat cybercrime in the next century. Due to the growing rate of cybercrime, law enforcement is mandated to keep pace. To keep up from a legal perspective, the government needs more skilled forensic engineers, up-to-date prosecutors, and familiarity with new cybercrime threats. States such as California have been able to keep pace by assimilating a multiagency task force. Likewise, the government is developing more computer crime centers, such as those located in South Carolina (Wolf, 2009).
Conclusion
No account centering on cybercrime would be comprehensive without a look at cybercrime in the US and its legal perspectives. It is only fair to conclude that even though the federal and state governments have assimilated superior measures, such as periodic updating of laws and setting up more aggressive law policies, technology has overwhelmed the legal perspective in the United States. The ever-changing landscape of technology has laid out a robust background for cybercriminals to execute unlawful activities through techniques such as anonymity and attribution. Likewise, malware changes its structure endlessly; therefore, it has become difficult for the government to curb cybercrime as a whole. Though cybercrime has proven to be complex, consumers can begin the race towards combating the threats by blending in superior techniques such as the creation of strong passwords and the creation of awareness. It is unpredictable what the future of law entails in the United States due to the existence of a dynamic technology, but citizens can conform to superior approaches that will largely mitigate cyber threats.
References
Ashford, W. (2018, February 21). The economic impact of cybercrime is significant and rising. ComputerWeekly.com. https://www.computerweekly.com/news/252435439/Economic-impact-of-cyber-crime-is-significant-and-rising
Catherine, W. (2006). 15-Year-Old admits hacking NASA computers. ABC News. https://abcnews.go.com/Technology/story?id=119423&page=1
CDC. (2019, February 21). Health Insurance Portability and Accountability Act of 1996 (HIPAA). Centers for Disease Control and Prevention. https://www.cdc.gov/phlp/publications/topic/hipaa.html
Clement, J. (2019). U.S. cyber threats 2018. Statista. https://www.statista.com/statistics/767993/most-common-cyber-threats-usa/
Clement, J. (2020). U.S. population with a social media profile 2019. Statista. https://www.statista.com/statistics/273476/percentage-of-us-population-with-a-social-network-profile/
Groot, J. D. (2019, July 15). What is GLBA compliance? Understanding the data protection requirements of the Gramm-Leach-Bliley act in 2019. Digital Guardian. https://digitalguardian.com/blog/what-glba-compliance-understanding-data-protection-requirements-gramm-leach-bliley-act#:~:text=To%20be%20GLBA%20compliant%2C%20financial,private%20data%20in%20accordance%20with
Hadar, M. (2019, September 11). Think your credit card is safe in your wallet. The Washington Post. https://www.washingtonpost.com/business/think-your-credit-card-is-safe-in-your-wallet-think-again/2019/09/11/05e316e4-be0e-11e9-b873-63ace636af08_story.html
Jay, J. (2016, December 1). Impact of law and regulation on cybercrime. John Jay College of Criminal Justice. https://www.jjay.cuny.edu/impact-law-and-regulation-cybercrime
Johnson, J. (2019, September 25). Protecting yourself from cybercrime. Forbes. https://www.forbes.com/sites/joeljohnson/2019/09/25/protecting-yourself-from-cybercrime/#5f99b5111d84
Katharina, K. M. (2019, March). Cybercrime module 5 key issues: Obstacles to cybercrime investigations. United Nations Office on Drugs and Crime. https://www.unodc.org/e4j/en/cybercrime/module-5/key-issues/obstacles-to-cybercrime-investigations.html
Kurth LLP, H. A. (2018, April 6). Data security and cybercrime in the USA. Lexology. https://www.lexology.com/library/detail.aspx?g=04cbbd2d-da22-4a97-b577-3121fea0cff0
Madsen, C. (2019, July 25). The evolution of cybercrime. Webroot Blog. https://www.webroot.com/blog/2019/04/23/the-evolution-of-cybercrime/
Mendoza, M. A. (2017, March 13). Challenges and implications of cybersecurity legislation. WeLiveSecurity. https://www.welivesecurity.com/2017/03/13/challenges-implications-cybersecurity-legislation/
Norton Cyber Security. (2017). Top 5 cybercrimes in the U.S., from the NCSIR. Official Site | Norton™ – Antivirus & Anti-Malware Software. https://us.norton.com/internetsecurity-online-scams-top-5-cybercrimes-in-america-norton-cyber-security-insights-report.html#compromised
Popat, A. (2018). Five ways to protect your company against cyber attacks. Entrepreneur. https://www.entrepreneur.com/article/316886
Poremba. (2019, September 18). The future of cybercrime: Where are we headed? Security Intelligence. https://securityintelligence.com/articles/the-future-of-cybercrime-where-are-we-headed/
Rahaman, M. (2016). Cybercrime affects society in different ways.
Rantala, R. R. (2008). Cybercrime against businesses, 2005. US Department of Justice, Office of Justice Programs, Bureau of Justice Statistics.
Shinder, D. L., & Cross, M. (2008). Scene of the Cybercrime. Elsevier.
U.S. Department of Health & Human Services. (2013, July 26). Summary of the HIPAA Privacy Rule. HHS.gov. https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html
Waghole, S. N. (2019). Cyber Crime Statistics. Journal of the Gujarat Research Society, 21(14s), 518-523.
Winder, D. (2019, December 14). Has your password been stolen? Here’s how to find out. Forbes. https://www.forbes.com/sites/daveywinder/2019/12/12/has-your-password-been-stolen-how-to-find-out-crime-hacking-tutorial-tech-help/#6340b9bd570f
Wolf, U. (2009, January 27). Cyber-crime: Law enforcement must keep pace with tech-savvy criminals. Government Technology State & Local Articles – e.Republic. https://www.govtech.com/dc/articles/Cyber-Crime-Law-Enforcement-Must-Keep-Pace.html
Branching Paths: A Novel Teacher Evaluation Model for Faculty Development
James P. Bavis and Ahn G. Nu
Department of English, Purdue University
ENGL 101: Course Name
Dr. Richard Teeth
Jan. 30, 2020
Abstract
A large body of assessment literature suggests that students’ evaluations of their teachers
(SETs) can fail to measure the construct of teaching in a variety of contexts. This can
compromise faculty development efforts that rely on information from SETs. The disconnect
between SET results and faculty development efforts is exacerbated in educational contexts
that demand particular teaching skills that SETs do not value in proportion to their local
importance (or do not measure at all). This paper responds to these challenges by proposing an
instrument for the assessment of teaching that allows institutional stakeholders to define the
teaching construct in a way they determine to suit the local context. The main innovation of this
instrument relative to traditional SETs is that it employs a branching “tree” structure populated
by binary-choice items based on the Empirically derived, Binary-choice, Boundary-definition
(EBB) scale developed by Turner and Upshur for ESL writing assessment. The paper argues
that this structure can allow stakeholders to define the teaching construct by changing the order
and sensitivity of the nodes in the tree of possible outcomes, each of which corresponds to a
specific teaching skill. The paper concludes by outlining a pilot study that will examine the
differences between the proposed EBB instrument and a traditional SET employing a series of
multiple-choice questions (MCQs) that correspond to Likert scale values.
Keywords: college teaching, student evaluations of teaching, scale development, EBB
scale, pedagogies, educational assessment, faculty development
Branching Paths: A Novel Teacher Evaluation Model for Faculty Development
According to Theall (2017), “Faculty evaluation and development cannot be considered
separately … evaluation without development is punitive, and development without evaluation is
guesswork” (p. 91). As the practices that constitute modern programmatic faculty development
have evolved from their humble beginnings to become a commonplace feature of university life
(Lewis, 1996), a variety of tactics to evaluate the proficiency of teaching faculty for development
purposes have likewise become commonplace. These include measures as diverse as peer
observations, the development of teaching portfolios, and student evaluations.
One such measure, the student evaluation of teacher (SET), has been virtually
ubiquitous since at least the 1990s (Wilson, 1998). Though records of SET-like instruments can
be traced to work at Purdue University in the 1920s (Remmers & Brandenburg, 1927), most
modern histories of faculty development suggest that their rise to widespread popularity went
hand-in-hand with the birth of modern faculty development programs in the 1970s, when
universities began to adopt them in response to student protest movements criticizing
mainstream university curricula and approaches to instruction (Gaff & Simpson, 1994; Lewis,
1996; McKeachie, 1996). By the mid-2000s, researchers had begun to characterize SETs in
terms like “…the predominant measure of university teacher performance […] worldwide”
(Pounder, 2007, p. 178). Today, SETs play an important role in teacher assessment and faculty
development at most universities (Davis, 2009). Recent SET research practically takes the
presence of some form of this assessment on most campuses as a given. Spooren et al.
(2017), for instance, merely note that SETs can be found at “almost every institution of
higher education throughout the world” (p. 130). Similarly, Darwin (2012) refers to teacher
evaluation as an established orthodoxy, labeling it a “venerated,” “axiomatic” institutional
practice (p. 733).
Moreover, SETs do not only help universities direct their faculty development efforts.
They have also come to occupy a place of considerable institutional importance for their role in
personnel considerations, informing important decisions like hiring, firing, tenure, and
promotion. Seldin (1993, as cited in Pounder, 2007) finds that 86% of higher educational
institutions use SETs as important factors in personnel decisions. A 1991 survey of department
chairs found 97% used student evaluations to assess teaching performance (US Department of
Education). Since the mid-late 1990s, a general trend towards comprehensive methods of
teacher evaluation that include multiple forms of assessment has been observed
(Berk, 2005). However, recent research suggests the usage of SETs in personnel decisions is
still overwhelmingly common, though hard percentages are hard to come by, perhaps owing to
the multifaceted nature of these decisions (Boring et al., 2017; Galbraith et al., 2012). In certain
contexts, student evaluations can also have ramifications beyond the level of individual
instructors. Particularly as public schools have experienced pressure in recent decades to adopt
neoliberal, market-based approaches to self-assessment and adopt a student-as-consumer
mindset (Darwin, 2012; Marginson, 2009), information from evaluations can even feature in
department- or school-wide funding decisions (see, for instance, the Obama Administration’s
Race to the Top initiative, which awarded grants to K-12 institutions that adopted value-added
models for teacher evaluation).
However, while SETs play a crucial role in faculty development and personnel decisions
for many education institutions, current approaches to SET administration are not as well-suited
to these purposes as they could be. This paper argues that a formative, empirical approach to
teacher evaluation developed in response to the demands of the local context is better-suited
for helping institutions improve their teachers. It proposes the Heavilon Evaluation of Teacher,
or HET, a new teacher assessment instrument that can strengthen current approaches to
faculty development by making them more responsive to teachers’ local contexts. It also
proposes a pilot study that will clarify the differences between this new instrument and the
Introductory Composition at Purdue (ICaP) SET, a more traditional instrument used for similar
purposes. The results of this study will direct future efforts to refine the proposed instrument.
Methods section, which follows, will propose a pilot study that compares the results of the
proposed instrument to the results of a traditional SET (and will also provide necessary
background information on both of these evaluations). The paper will conclude with a discussion
of how the results of the pilot study will inform future iterations of the proposed instrument and,
more broadly, how universities should argue for local development of assessments.
Literature Review
Effective Teaching: A Contextual Construct
The validity of the instrument this paper proposes is contingent on the idea that it is
possible to systematically measure a teacher’s ability to teach. Indeed, the same could be said
for virtually all teacher evaluations. Yet despite the exceeding commonness of SETs and the
faculty development programs that depend on their input, there is little scholarly consensus on
precisely what constitutes “good” or “effective” teaching. It would be impossible to review the
entire history of the debate surrounding teaching effectiveness, owing to its sheer scope—such
a summary might need to begin with, for instance, Cicero and Quintilian. However, a cursory
overview of important recent developments (particularly those revealed in meta-analyses of
empirical studies of teaching) can help situate the instrument this paper proposes in relevant
academic conversations.
Meta-analysis 1. One core assumption that undergirds many of these conversations is
the notion that good teaching has effects that can be observed in terms of student achievement.
A meta-analysis of 167 empirical studies that investigated the effects of various teaching factors
on student achievement (Kyriakides et al., 2013) supported the effectiveness of a set of
teaching factors that the authors group together under the label of the “dynamic model” of
teaching. Seven of the eight factors (Orientation, Structuring, Modeling, Questioning,
Assessment, Time Management, and Classroom as Learning Environment) corresponded to
moderate average effect sizes (of between 0.34–0.41 standard deviations) in measures of
student achievement. The eighth factor, Application (defined as seatwork and small-group tasks
oriented toward practice of course concepts), corresponded to only a small yet still significant
effect size of 0.18. The lack of any single decisive factor in the meta-analysis supports the idea
that effective teaching is likely a multivariate construct. However, the authors also note the
context-dependent nature of effective teaching. Application, the least-important teaching factor
overall, proved more important in studies examining young students (p. 148). Modeling, by
contrast, was especially important for older students.
Meta-analysis 2. A different meta-analysis that argues for the importance of factors like
clarity and setting challenging goals (Hattie, 2009) nevertheless also finds that the effect sizes
of various teaching factors can be highly context-dependent. For example, effect sizes for
homework range from 0.15 (a small effect) to 0.64 (a moderately large effect) based on the level
of education examined. Similar ranges are observed for differences in academic subject (e.g.,
math vs. English) and student ability level. As Snook et al. (2009) note in their critical response
to Hattie, while it is possible to produce a figure for the average effect size of a particular
teaching factor, such averages obscure the importance of context.
Meta-analysis 3. A final meta-analysis (Seidel & Shavelson, 2007) found generally
small average effect sizes for most teaching factors—organization and academic domain-
specific learning activities showed the biggest cognitive effects (0.33 and 0.25, respectively).
Here, again, however, effectiveness varied considerably due to contextual factors like domain of
study and level of education in ways that average effect sizes do not indicate.
These pieces of evidence suggest that there are multiple teaching factors that produce
measurable gains in student achievement and that the relative importance of individual factors
can be highly dependent on contextual factors like student identity. This is in line with a well-
documented phenomenon in educational research that complicates attempts to measure
teaching effectiveness purely in terms of student achievement. This is that “the largest source of
variation in student learning is attributable to differences in what students bring to school – their
abilities and attitudes, and family and community” (McKenzie et al., 2005, p. 2). Student
achievement varies greatly due to non-teacher factors like socio-economic status and home life
(Snook et al., 2009). This means that, even to the extent that it is possible to observe the
effectiveness of certain teaching behaviors in terms of student achievement, it is difficult to set
generalizable benchmarks or standards for student achievement. Thus it is also difficult to make
true apples-to-apples comparisons about teaching effectiveness between different educational
contexts: due to vast differences between different kinds of students, a notion of what
constitutes highly effective teaching in one context may not in another. This difficulty has
featured in criticism of certain meta-analyses that have purported to make generalizable claims
about what teaching factors produce the biggest effects (Hattie, 2009). A variety of other
commentators have also made similar claims about the importance of contextual factors in
teaching effectiveness for decades (see, e.g., Bloom et al., 1956; Cashin, 1990; Theall, 2017).
The studies described above mainly measure teaching effectiveness in terms of
academic achievement. It should certainly be noted that these quantifiable measures are not
generally regarded as the only outcomes of effective teaching worth pursuing. Qualitative
outcomes like increased affinity for learning and greater sense of self-efficacy are also important
learning goals. Here, also, local context plays a large role.
SETs: Imperfect Measures of Teaching
As noted in this paper’s introduction, SETs are commonly used to assess teaching
performance and inform faculty development efforts. Typically, these take the form of an end-of-
term summative evaluation comprised of multiple-choice questions (MCQs) that allow students
to rate statements about their teachers on Likert scales. These are often accompanied with
short-answer responses which may or may not be optional.
SETs serve important institutional purposes. While commentators have noted that there
are crucial aspects of instruction that students are not equipped to judge (Benton & Young,
2018), SETs nevertheless give students a rare institutional voice. They represent an opportunity
to offer anonymous feedback on their teaching experience and potentially address what they
deem to be their teacher’s successes or failures. Students are also uniquely positioned to offer
meaningful feedback on an instructors’ teaching because they typically have much more
extensive firsthand experience of it than any other educational stakeholder. Even peer
observers only witness a small fraction of the instructional sessions during a given semester.
Students with perfect attendance, by contrast, witness all of them. Thus, in a certain sense, a
student can theoretically assess a teacher’s ability more authoritatively than even peer mentors
can.
While historical attempts to validate SETs have produced mixed results, some studies
have demonstrated their promise. Howard (1985), for instance, finds that SETs are significantly
more predictive of teaching effectiveness than self-report, peer, and trained-observer
assessments. A review of several decades of literature on teaching evaluations (Watchel, 1998)
found that a majority of researchers believe SETs to be generally valid and reliable, despite
occasional misgivings. This review notes that even scholars who support SETs frequently argue
that they alone cannot direct efforts to improve teaching and that multiple avenues of feedback
are necessary (L’hommedieu et al., 1990; Seldin, 1993).
Finally, SETs also serve purposes secondary to the ostensible goal of improving
instruction that nonetheless matter. They can be used to bolster faculty CVs and assign
departmental awards, for instance. SETs can also provide valuable information unrelated to
teaching. It would be hard to argue that it is not useful for a teacher to learn, for example, that a
student finds the class unbearably boring, or that a student finds the teacher’s personality so
unpleasant as to hinder her learning. In short, there is real value in understanding students’
affective experience of a particular class, even in cases when that value does not necessarily
lend itself to firm conclusions about the teacher’s professional abilities.
However, a wealth of scholarly research has demonstrated that SETs are prone to fail in
certain contexts. A common criticism is that SETs can frequently be confounded by factors
external to the teaching construct. The best introduction to the research that serves as the basis
for this claim is probably Neath (1996), who performs something of a meta-analysis by
presenting these external confounds in the form of twenty sarcastic suggestions to teaching
faculty. Among these are the instructions to “grade leniently,” “administer ratings before tests”
(p. 1365), and “not teach required courses” (#11) (p. 1367). Most of Neath’s advice reflects an
overriding observation that teaching evaluations tend to document students’ affective feelings
toward a class, rather than their teachers’ abilities, even when the evaluations explicitly ask
students to judge the latter.
Beyond Neath, much of the available research paints a similar picture. For example, a
study of over 30,000 economics students concluded that “the poorer the student considered his
teacher to be [on an SET], the more economics he understood” (Attiyeh & Lumsden, 1972). A
1998 meta-analysis argued that “there is no evidence that the use of teacher ratings improves
learning in the long run” (Armstrong, 1998, p. 1223). A 2010 National Bureau of Economic
Research study found that high SET scores for a course’s instructor correlated with “high
contemporaneous course achievement,” but “low follow-on achievement” (in other words, the
students would tend to do well in the course, but poorly in future courses in the same field of
study). Others observing this effect have suggested SETs reward a pandering, “soft-ball”
teaching style in the initial course (Carrell & West, 2010). More recent research suggests that
course topic can have a significant effect on SET scores as well: teachers of “quantitative
courses” (i.e., math-focused classes) tend to receive lower evaluations from students than their
humanities peers (Uttl & Smibert, 2017).
Several modern SET studies have also demonstrated bias on the basis of gender
(Anderson & Miller, 1997; Basow, 1995), physical appearance/sexiness (Ambady & Rosenthal,
1993), and other identity markers that do not affect teaching quality. Gender, in particular, has
attracted significant attention. One recent study examined two online classes: one in which
instructors identified themselves to students as male, and another in which they identified as
female (regardless of the instructor’s actual gender) (Macnell et al., 2015). The classes were
identical in structure and content, and the instructors’ true identities were concealed from
students. The study found that students rated the male identity higher on average. However, a
few studies have demonstrated the reverse of the gender bias mentioned above (that is, women
received higher scores) (Bachen et al., 1999) while others have registered no gender bias one
way or another (Centra & Gaubatz, 2000).
The goal of presenting these criticisms is not necessarily to diminish the institutional
importance of SETs. Of course, insofar as institutions value the instruction of their students, it is
important that those students have some say in the content and character of that instruction.
Rather, the goal here is simply to demonstrate that using SETs for faculty development
purposes—much less for personnel decisions—can present problems. It is also to make the
case that, despite the abundance of literature on SETs, there is still plenty of room for scholarly
attempts to make these instruments more useful.
Empirical Scales and Locally-Relevant Evaluation
One way to ensure that teaching assessments are more responsive to the demands of
teachers’ local contexts is to develop those assessments locally, ideally via a process that
involves the input of a variety of local stakeholders. Here, writing assessment literature offers a
promising path forward: empirical scale development, the process of structuring and calibrating
instruments in response to local input and data (e.g., in the context of writing assessment,
student writing samples and performance information). This practice contrasts, for instance, with
deductive approaches to scale development that attempt to represent predetermined theoretical
constructs so that results can be generalized.
Supporters of the empirical process argue that empirical scales have several
advantages. They are frequently posited as potential solutions to well-documented reliability and
validity issues that can occur with theoretical or intuitive scale development (Brindley, 1998;
Turner & Upshur, 1995, 2002). Empirical scales can also help researchers avoid issues caused
by subjective or vaguely-worded standards in other kinds of scales (Brindley, 1998) because
they require buy-in from local stakeholders who must agree on these standards based on
their understanding of the local context. Fulcher et al. (2011) note the following, for instance:
Measurement-driven scales suffer from descriptional inadequacy. They are not sensitive
to the communicative context or the interactional complexities of language use. The level
of abstraction is too great, creating a gulf between the score and its meaning. Only with
a richer description of contextually based performance, can we strengthen the meaning
of the score, and hence the validity of score-based inferences. (pp. 8–9)
There is also some evidence that the branching structure of the EBB scale specifically
can allow for more reliable and valid assessments, even if it is typically easier to calibrate and
use conventional scales (Hirai & Koizumi, 2013). Finally, scholars have also argued that
theory-based approaches to scale development do not always result in instruments that
realistically capture ordinary classroom situations (Knoch, 2007, 2009).
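To make the EBB structure concrete, the following Python sketch models a branching scale as a small binary tree in which each node poses a boundary-definition question and each leaf is a score. It is purely illustrative: the questions, scores, and names are invented for this example, not drawn from Turner and Upshur.

```python
from dataclasses import dataclass

@dataclass
class Node:
    question: str       # binary boundary-definition question posed to the rater
    yes: "Node | int"   # subtree (or leaf score) when the answer is yes
    no: "Node | int"    # subtree (or leaf score) when the answer is no

def score(node: "Node | int", answers: dict) -> int:
    """Walk the tree using a mapping of question text -> bool; return the leaf score."""
    while isinstance(node, Node):
        node = node.yes if answers[node.question] else node.no
    return node

# A toy three-question tree for a single teaching skill (questions invented):
tree = Node("Did the instructor state clear goals?",
            yes=Node("Were activities aligned with the goals?", yes=4, no=3),
            no=Node("Was any structure evident?", yes=2, no=1))
```

Reordering the nodes or adding depth to one branch changes which distinctions the scale is most sensitive to, which is how local stakeholders could redefine the construct for their context.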
The most prevalent criticism of empirical scale development in the literature is that the
local, contingent nature of empirical scales basically discards any notion of their results’
generalizability. Fulcher (2003), for instance, makes this basic criticism of the EBB scale even
as he subsequently argues that “the explicitness of the design methodology for EBBs is
impressive, and their usefulness in pedagogic settings is attractive” (p. 107). In the context of
this particular paper’s aims, there is also the fact that the literature supporting empirical scale
development originates in the field of writing assessment, rather than teaching assessment.
Moreover, there is little extant research into the applications of empirical scale development for
the latter purpose. Thus, there is no guarantee that the benefits of empirical development
approaches can be realized in the realm of teaching assessment. There is also no guarantee
that they cannot. In taking a tentative step towards a better understanding of how these
assessment schema function in a new context, then, the study described in the next section
asks whether the principles that guide some of the most promising practices for assessing
students cannot be put to productive use in assessing teachers.
Materials and Methods
This section proposes a pilot study that will compare the ICaP SET to the Heavilon
Evaluation of Teacher (HET), an instrument designed to combat the statistical ceiling effect
described above. In this section, the format and composition of the HET is described, with
special attention paid to its branching scale design. Following this, the procedure for the study is
outlined, and planned interpretations of the data are discussed.
The Purdue ICaP SET
The SET employed by Introductory Composition at Purdue (ICaP) program as of
January 2019 serves as an example of many of the prevailing trends in current SET
administration. The evaluation is administered digitally: ICaP students receive an invitation to
complete the evaluation via email near the end of the semester, and must complete it before
finals week (i.e., the week that follows the normal sixteen-week term) for their responses to be
counted. The evaluation is entirely optional: teachers may not require their students to complete
it, nor may they offer incentives like extra credit as motivation. However, some instructors opt to
devote a small amount of in-class time for the evaluations. In these cases, it is common practice
for instructors to leave the room so as not to coerce high scores.
The ICaP SET mostly takes the form of a simple multiple-choice survey. Thirty-four
MCQs appear on the survey. Of these, the first four relate to demographics: students must
indicate their year of instruction, their expected grade, their area of study, and whether they are
taking the course as a requirement or as an elective. Following these are two questions related
to the overall quality of the course and the instructor (students must rate each from “very poor”
to “excellent” on a five-point scale). These are “university core” questions that must appear on
every SET administered at Purdue, regardless of school, major, or course. Students are
also invited to respond to two short-answer prompts: “What specific suggestions do you have for
improving the course or the way it is taught?” and “What is something that the professor does
well?” Responses to these questions are optional.
The remainder of the MCQs (thirty in total) are chosen by department administrators
from a list of 646 possible questions provided by the Purdue Instructor Course Evaluation
Service (PICES). Each of these PICES questions requires students to respond to a statement
about the course on a five-point Likert scale. Likert scales are simple scales used to indicate
degrees of agreement. In the case of the ICaP SET, students must indicate whether they
strongly agree, agree, disagree, strongly disagree, or are undecided. These thirty Likert scale
questions assess a wide variety of the course’s and instructor’s qualities. Examples include “My
instructor seems well-prepared for class,” “This course helps me analyze my own and other
students’ writing,” and “When I have a question or comment I know it will be respected.”
One important consequence of the ICaP SET within the Purdue English department is
the Excellence in Teaching Award (which, prior to Fall 2018, was named the Quintilian or,
colloquially, “Q” Award). This is a symbolic prize given every semester to graduate instructors
who score highly on their evaluations. According to the ICaP site, “ICaP instructors whose
teaching evaluations achieve a certain threshold earn [the award], recognizing the top 10% of
teaching evaluations at Purdue.” While this description is misleading—the award actually goes
to instructors whose SET scores rank in the top decile in the range of possible outcomes, but
not necessarily ones who scored better than 90% of other instructors—the award nevertheless
provides an opportunity for departmental instructors to distinguish their CVs and teaching
portfolios.
Insofar as it is distributed digitally, it is composed of MCQs (plus a few short-answer
responses), and it is intended as end-of-term summative assessment, the ICaP SET embodies
the current prevailing trends in university-level SET administration. In this pilot study, it serves
as a stand-in for current SET administration practices (as generally conceived).
The HET
Like the ICaP SET, the HET uses student responses to questions to produce a score
that purports to represent their teacher’s pedagogical ability. It has a similar number of items
(28, as opposed to the ICaP SET’s 34). However, despite these superficial similarities, the
instrument’s structure and content differ substantially from the ICaP SET’s.
The most notable differences are the construction of the items on the test and the way
that responses to these items determine the teacher’s final score. Items on the HET do not use
the typical Likert scale, but instead prompt students to respond to a question with a simple
“yes/no” binary choice. By answering “yes” and “no” to these questions, student responders
navigate a branching “tree” map of possibilities whose endpoints correspond to points on a 33-
point ordinal scale.
The items on the HET are grouped into six suites according to their relevance to six
different aspects of the teaching construct (described below). The suites of questions
correspond to directional nodes on the scale—branching paths where an instructor can move
either “up” or “down” based on the student’s responses. If a student awards a set number of
“yes” responses to questions in a given suite (signifying a positive perception of the instructor’s
teaching), the instructor moves up on the scale. If a student does not award enough “yes”
responses, the instructor moves down. Thus, after the student has answered all of the
questions, the instructor’s “end position” on the branching tree of possibilities corresponds to a
point on the 33-point scale. A visualization of this structure is presented in Figure 1.
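To make the traversal concrete, the node logic can be sketched in Python. This is a minimal illustration only: the per-suite “yes” thresholds shown here are hypothetical placeholders, since the actual sensitivity of each node is set during local scale development, as discussed later in this section.

```python
# Sketch of the HET's branching traversal. At each scored node, the
# instructor moves "up" if the student gives at least a threshold
# number of "yes" responses in that suite, and "down" otherwise.
# Thresholds below are HYPOTHETICAL placeholders, not the real values.

SUITES = [
    # (suite name, number of items, hypothetical "yes" threshold)
    ("Safe learning environment", 4, 4),
    ("Classroom management", 4, 4),
    ("Clear instruction", 7, 6),
    ("Activating teaching methods", 7, 6),
    ("Learning strategies", 6, 5),
]

def branch_path(yes_counts):
    """Return the up/down decision at each of the five scored nodes,
    given the number of "yes" responses in each suite."""
    path = []
    for (name, n_items, threshold), yes in zip(SUITES, yes_counts):
        path.append("up" if yes >= threshold else "down")
    return path

print(branch_path([4, 3, 7, 6, 2]))  # ['up', 'down', 'up', 'up', 'down']
```

One student response thus traces a single path through the tree; the endpoint of that path, plus the tie-breaker suite, yields the score described under Scoring below.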
Figure 1
Illustration of HET’s Branching Structure
Note. Each node in this diagram corresponds to a suite of HET/ICALT items, rather than to a single item.
The questions on the HET derive from the International Comparative Analysis of
Learning and Teaching (ICALT), an instrument that measures observable teaching behaviors for
the purpose of international pedagogical research within the European Union. The most recent
version of the ICALT contains 32 items across six topic domains that correspond to six broad
teaching skills. For each item, students rate a statement about the teacher on a four-point Likert
scale. The main advantage of using ICALT items in the HET is that they have been
independently tested for reliability and validity numerous times over 17 years of development
(see, e.g., Van de Grift, 2007). Thus, their results lend themselves to meaningful comparisons
between teachers (as well as providing administrators a reasonable level of confidence in their
ability to model the teaching construct itself).
The six “suites” of questions on the HET, which correspond to the six topic domains on
the ICALT, are presented in Table 1.
Table 1
HET Question Suites
Suite # of Items Description
Safe learning environment 4 Whether the teacher is able to maintain positive, nonthreatening relationships with students (and to foster these sorts of relationships among students).
Classroom management 4 Whether the teacher is able to maintain an orderly, predictable environment.
Clear instruction 7 Whether the teacher is able to explain class topics comprehensibly, provide clear sets of goals for assignments, and articulate the connections between the assignments and the class topics in helpful ways.
Activating teaching methods 7 Whether the teacher uses strategies that motivate students to think about the class’s topics.
Learning strategies 6 Whether teachers take explicit steps to teach students how to learn (as opposed to merely providing students informational content).
Differentiation 4 Whether teachers can successfully adjust their behavior to meet the diverse learning needs of individual students.
Note. Item counts are derived from the original ICALT item suites.
The items on the HET are modified from the ICALT items only insofar as they are phrased
as binary choices, rather than as invitations to rate the teacher. Usually, this means the addition
of the word “does” and a question mark at the end of the sentence. For example, the second
safe learning climate item on the ICALT is presented as “The teacher maintains a relaxed
atmosphere.” On the HET, this item is rephrased as, “Does the teacher maintain a relaxed
atmosphere?” See Appendix for additional sample items.
As will be discussed below, the ordering of item suites plays a decisive role in the teacher’s
final score because the branching scale weights earlier suites more heavily. So too does the
“sensitivity” of each suite of items (i.e., the number of positive responses required to progress
upward at each branching node). This means that it is important for local stakeholders to
participate in the development of the scale. In other words, these stakeholders must be involved
in decisions about how to order the item suites and adjust the sensitivity of each node. This is
described in more detail below.
Once the scale has been developed, the assessment has been administered, and the
teacher’s endpoint score has been obtained, the student rater is prompted to offer any textual
feedback that s/he feels summarizes the course experience, good or bad. Like the short
response items in the ICaP SET, this item is optional. The short-response item is as follows:
• What would you say about this instructor, good or bad, to another student considering
taking this course?
The final four items are demographic questions. For these, students indicate their grade
level, their expected grade for the course, their school/college (e.g., College of Liberal Arts,
School of Agriculture, etc.), and whether they are taking the course as an elective or as a
degree requirement. These questions are identical to the demographic items on the ICaP SET.
To summarize, the items on the HET are presented as follows:
• Branching binary questions (32 different items; six branches)
o These questions provide the teacher’s numerical score
• Short response prompt (one item)
• Demographic questions (four items)
Scoring
The main data for this instrument are derived from the endpoints on a branching ordinal
scale with 33 points. Because each question is presented as a binary yes/no choice (with “yes”
suggesting a better teacher), and because paths on the branching scale are decided in terms of
whether the teacher receives all “yes” responses in a given suite, 32 outcomes are
possible from the first five suites of items. For example, the worst possible outcome would be
five successive “down” branches, the second-worst possible outcome would be four “down”
branches followed by an “up,” and so on. The sixth suite is a tie-breaker: instructors receive a
single additional point if they receive all “yes” responses on this suite.
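This endpoint arithmetic can be sketched as follows. The sketch assumes the five scored branches are read as binary digits with the earliest branch most significant, which is an inference from the ordering of outcomes described above rather than a published scoring formula.

```python
def endpoint_score(ups, tiebreak):
    """Map five up/down branch outcomes (True = "up") and the
    tie-breaker suite result to a point on the 33-point scale.

    Five binary branches yield 2**5 = 32 orderable outcomes; the
    earliest branch is treated as most significant. The sixth suite
    adds one point if the student awards all "yes" responses to it.
    """
    rank = 1  # worst outcome: five successive "down" branches
    for i, up in enumerate(ups):
        if up:
            rank += 2 ** (4 - i)  # earlier branches carry more weight
    return rank + (1 if tiebreak else 0)

print(endpoint_score([False] * 5, False))           # 1  (worst possible)
print(endpoint_score([True] * 5, True))             # 33 (best possible)
print(endpoint_score([True] + [False] * 4, False))  # 17 (first branch "up")
```

Note how a single “up” at the first node already yields 17, consistent with the claim below that an “up” there places the teacher above 16 on the scale.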
By positioning certain suites of items early in the branching sequence, the HET gives
them more weight. For example, the first suite is the most important of all: an “up” here
automatically places the teacher above 16 on the scale, while a “down” precludes all scores
References
Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431–441. http://dx.doi.org/10.1037/0022-3514.64.3.431
American Association of University Professors. (n.d.). Background facts on contingent faculty positions. https://www.aaup.org/issues/contingency/background-facts
American Association of University Professors. (2018, October 11). Data snapshot: Contingent faculty in US higher ed. AAUP Updates. https://www.aaup.org/news/data-snapshot-contingent-faculty-us-higher-ed#.Xfpdmy2ZNR4
Anderson, K., & Miller, E. D. (1997). Gender and student evaluations of teaching. PS: Political Science and Politics, 30(2), 216–219. https://doi.org/10.2307/420499
Armstrong, J. S. (1998). Are student ratings of instruction useful? American Psychologist, 53(11), 1223–1224. http://dx.doi.org/10.1037/0003-066X.53.11.1223
Attiyeh, R., & Lumsden, K. G. (1972). Some modern myths in teaching economics: The U.K. experience. American Economic Review, 62(1), 429–443. https://www.jstor.org/stable/1821578
Bachen, C. M., McLoughlin, M. M., & Garcia, S. S. (1999). Assessing the role of gender in college students’ evaluations of faculty. Communication Education, 48(3), 193–210. http://doi.org/cqcgsr
Basow, S. A. (1995). Student evaluations of college professors: When gender matters. Journal of Educational Psychology, 87(4), 656–665. http://dx.doi.org/10.1037/0022-0663.87.4.656
Becker, W. (2000). Teaching economics in the 21st century. Journal of Economic Perspectives, 14(1), 109–120. http://dx.doi.org/10.1257/jep.14.1.109
Benton, S., & Young, S. (2018). Best practices in the evaluation of teaching. Idea Paper, 69.
Berk, R. A. (2005). Survey of 12 strategies to measure teaching effectiveness. International Journal of Teaching and Learning in Higher Education, 17(1), 48–62.
Bloom, B. S., Englehart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Addison-Wesley Longman Ltd.
Brandenburg, D., Slinde, C., & Batista, J. (1977). Student ratings of instruction: Validity and normative interpretations. Research in Higher Education, 7(1), 67–78. http://dx.doi.org/10.1007/BF00991945
Carrell, S., & West, J. (2010). Does professor quality matter? Evidence from random assignment of students to professors. Journal of Political Economy, 118(3), 409–432. https://doi.org/10.1086/653808
Cashin, W. E. (1990). Students do rate different academic fields differently. In M. Theall, & J. L. Franklin (Eds.), Student ratings of instruction: Issues for improving practice. New Directions for Teaching and Learning (pp. 113–121).
Centra, J., & Gaubatz, N. (2000). Is there gender bias in student evaluations of teaching? The Journal of Higher Education, 71(1), 17–33. https://doi.org/10.1080/00221546.2000.11780814
Davis, B. G. (2009). Tools for teaching (2nd ed.). Jossey-Bass.
Denton, D. (2013). Responding to edTPA: Transforming practice or applying shortcuts? AILACTE Journal, 10(1), 19–36.
Dizney, H., & Brickell, J. (1984). Effects of administrative scheduling and directions upon student ratings of instruction. Contemporary Educational Psychology, 9(1), 1–7. https://doi.org/10.1016/0361-476X(84)90001-8
DuCette, J., & Kenney, J. (1982). Do grading standards affect student evaluations of teaching? Some new evidence on an old question. Journal of Educational Psychology, 74(3), 308–314. https://doi.org/10.1037/0022-0663.74.3.308
Edwards, J. E., & Waters, L. K. (1984). Halo and leniency control in ratings as influenced by format, training, and rater characteristic differences. Managerial Psychology, 5(1), 1–16.
Fink, L. D. (2013). The current status of faculty development internationally. International Journal for the Scholarship of Teaching and Learning, 7(2). https://doi.org/10.20429/ijsotl.2013.070204
Fulcher, G. (2003). Testing second language speaking. Pearson Education.
Fulcher, G., Davidson, F., & Kemp, J. (2011). Effective rating scale development for speaking tests: Performance decision trees. Language Testing, 28(1), 5–29. https://doi.org/10.1177/0265532209359514
Gaff, J. G., & Simpson, R. D. (1994). Faculty development in the United States. Innovative Higher Education, 18(3), 167–176. https://doi.org/10.1007/BF01191111
Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.
Hoffman, R. A. (1983). Grade inflation and student evaluations of college courses. Educational and Psychological Research, 3(3), 51–160. https://doi.org/10.1023/A:101557981
Howard, G., Conway, C., & Maxwell, S. (1985). Construct validity of measures of college teaching effectiveness. Journal of Educational Psychology, 77(2), 187–196. http://dx.doi.org/10.1037/0022-0663.77.2.187
Kane, M. T. (2013). Validating interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1–73.
Kelley, T. (1927). Interpretation of educational measurements. World Book Co.
Knoch, U. (2007). Do empirically developed rating scales function differently to conventional rating scales for academic writing? Spaan Fellow Working Papers in Second or Foreign Language Assessment, 5, 1–36. English Language Institute, University of Michigan.
Knoch, U. (2009). Diagnostic assessment of writing: A comparison of two rating scales. Language Testing, 26(2), 275–304.
Appendix
Sample ICALT Items Rephrased for HET
Suite Sample ICALT Item HET Phrasing
Safe learning environment The teacher promotes mutual respect. Does the teacher promote mutual respect?
Classroom management The teacher uses learning time efficiently. Does the teacher use learning time efficiently?
Clear instruction The teacher gives feedback to pupils. Does the teacher give feedback to pupils?
Activating teaching methods The teacher provides interactive instruction and activities. Does the teacher provide interactive instruction and activities?
Learning strategies The teacher provides interactive instruction and activities. Does the teacher provide interactive instruction and activities?
Differentiation The teacher adapts the instruction to the relevant differences between pupils. Does the teacher adapt the instruction to the relevant differences between pupils?
