Recently, a FaceTime bug was discovered on Apple’s iPhone. The bug allowed callers to hear audio from the person they were calling before the recipient picked up. The problem occurred when users of iPhones running version 12.1 of Apple’s iOS operating system used the group calling function of FaceTime, and it was also found to affect Mac users who were called from an iPhone. As explained by USA Today, individuals wanting to record audio of another person could call on FaceTime, swipe up to add another person, and simply type in their own phone number. FaceTime then automatically answered for the person called and an audio recording started. Then, if the person who was called pressed the power button from the lock screen, video recording began.

After the bug was discovered, Apple pledged to fix it and issued a statement saying the company is “aware of this issue, and we have identified a fix that will be released in a software update later this week.” Apple has since said that the bug has been fixed and is no longer an issue, and that a software update will be released the following week. The update will allow owners of iPhones, iPads, and Mac computers to re-enable Group FaceTime, the feature that had allowed the potential eavesdropping.

While the bug may be fixed, a recent lawsuit claims that it allowed the unwanted recording of a deposition and, in doing so, caused emotional trauma. While the plaintiff, Larry Williams II, does not provide details on the contents of the deposition, the suit claims that the bug caused “permanent and continuous injuries, pain, suffering and emotional trauma,” leaving Williams with “a diminished capacity for the enjoyment of life.” The suit seeks damages for negligence, design defect, failure to warn, breach of express and implied warranties, fraudulent and negligent misrepresentation, fraudulent concealment, and unjust enrichment.

–Ben Breckler


In 2019, very few doubt that Amazon and Facebook have much to offer us. Yet these companies face increasing criticism for failing to be better corporate citizens and to take more responsibility for the impact they have on American life. Mark Zuckerberg and Jeff Bezos are our Rockefeller and Carnegie, with the same polarizing effect: robber barons to some, captains of industry to others.

Beyond these broad criticisms, both Facebook and Amazon have faced specific allegations of furthering racism through their software.

In 2017, it was discovered that Facebook’s ad targeting system offered to sell ads targeting users who self-identified as anti-Semitic. Although Facebook did not design its algorithm to further hate in this way, it had few preventative measures in place and little human oversight that could have stopped this episode.

A group of Facebook users filed a class-action lawsuit in 2017, Onuoha v. Facebook, alleging that the company violated Title VII of the 1964 Civil Rights Act by enabling race-based ad targeting. The plaintiffs allege “that Facebook violates these and other federal and state laws (1) by intentionally discriminating against people of color by excluding them from receiving Facebook’s recruitment, marketing, and advertising of employment, housing, and credit opportunities expressly based on their race or national origin, and (2) by applying facially neutral criteria–like the zip code a person lives in or his or her similarity to a business’ existing customers–that disproportionately exclude people of color from receiving ads, especially in racially segregated areas.”

Facebook has since changed its ad policy, no longer allowing ad buyers to selectively exclude certain racial groups from viewing some of their ads. This is a rare, at least partially successful, example of individual users joining together to force Facebook to change its policy through legal action. The case itself is still ongoing in the Northern District of California.

Despite a myriad of reports that Facebook is either actively profiting from ads which exploit racism or failing to combat racist posts, the company has struggled to respond.

One of Facebook’s strategies for combating hate speech and racism on its platform is human content moderation: moderators view thousands of posts each week, looking to weed out hate speech and racist posts that algorithms cannot detect. Yet even this strategy has its flaws. One of the company’s content moderators brought a suit against it in September 2018, alleging that “videos, images and livestreamed broadcasts of child sexual abuse, rape, torture, bestiality, beheadings, suicide and murder” led to PTSD. Facebook relies on just 7,500 moderators to view as many as 10 million potentially inappropriate posts each week, or roughly 1,300 posts per moderator.

While Facebook keeps many of the details of its algorithm, ad-targeting system, and moderation system private, one of the most effective ways to hold the company accountable thus far has been to bring a civil rights suit where outright discrimination can be shown. Additional governmental regulation may be necessary to ensure that Facebook better controls hate speech and racism, but small groups of users can band together and bring suit where individual discriminatory actions can be shown. The example of Onuoha v. Facebook, while not yet entirely successful, is replicable for attacking other instances of racism in big tech companies. As in society at large, eliminating racism in tech altogether with a single stroke may be impossible, but dedicated citizens fighting small parts of the greater whole can lead to major gains in removing the pervasive influence of racism from our tech giants.

–Justin Allen


The first piece of AI-generated art sold at auction for $432,500 in 2018. The art collective responsible for the piece, titled “Portrait of Edmond Belamy,” originally marketed it as “created by artificial intelligence,” although it later walked that back and regretted giving “all the credit to the machine.”

The collective, Obvious, signed the portrait with the mathematical equation that was used to create it. The portrait, and others by the collective, were generated using a machine-learning technique known as a generative adversarial network (GAN). GANs were developed by Ian Goodfellow in 2014, and Obvious used one to produce images “based on a data set of 15,000 portraits painted between the 14th and 20th centuries.” Robbie Barrat, a researcher at Stanford’s AI research lab, trained GANs to produce original lyrics, landscapes, and portraits, and shared his code online. (He also created the data scraper that Obvious used to pull images from Wikiart.) Obvious’s code “borrowed heavily” from Barrat’s, and results from the two models are “suspiciously close.” Obvious agrees that “there is not a big percentage [of Barrat’s code] that has been modified,” but asserts that they put a significant amount of time and effort into creating the artwork.
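At a high level, a GAN pits two neural networks against each other: a generator that turns random noise into candidate images and a discriminator that tries to tell those candidates apart from real paintings in the training set, with each network improving by competing against the other. The Python sketch below is a deliberately minimal illustration of that adversarial training loop, assuming PyTorch; the tiny fully connected networks, hyperparameters, and random stand-in data are placeholders, not the 15,000-portrait dataset or Barrat’s actual code.

# Minimal GAN training loop (illustrative sketch only; network sizes,
# hyperparameters, and the random "portraits" below are placeholders).
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64 * 64, 100, 32

generator = nn.Sequential(          # maps random noise to a fake image
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh())

discriminator = nn.Sequential(      # scores how "real" an image looks
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.rand(BATCH, IMG_DIM) * 2 - 1   # stand-in for real portraits
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Train the discriminator to label real images 1 and generated images 0.
    d_loss = (bce(discriminator(real), torch.ones(BATCH, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to make the discriminator call its fakes real.
    g_loss = bce(discriminator(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

After enough rounds of this contest, feeding the trained generator fresh noise yields new images that imitate the style of the training data, which is how a portrait “in the style of” old masters can emerge from a dataset of historical paintings.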

The question of who owns art that is developed with AI is a complicated one. AI is a tool for artists to use, just as they would any other tool. But there are different ways that people contribute to and work with the tool, so there may be different owners of the inputs, the learning algorithm, the trained algorithm, and the outputs (the actual “art” being sold).

The inputs that AI artists tend to use are images or works in the public domain, a pragmatic measure to avoid copyright infringement; whether training on copyrighted works would count as fair use has not been litigated. Some artists, like Anna Ridler, input their own photographs and drawings into data sets. Ridler thinks of the construction of the dataset itself, through deciding which images to include and which to exclude, as “a creative act and very much part of the piece.”

Barrat shared his code, but he could have sold it. The images used in a dataset could themselves be copyrighted. If Ridler is correct, anyone who curates a set of materials has created something in which they hold a proprietary interest. Artists using AI for their own work can ensure that they own that work by using open-source code together with images and data sets that they created themselves or that are in the public domain.

There is separate criticism about the quality and artistic merit of the art on its own terms, from technical shortcomings to aesthetic value to what it means to be creative. But regardless of its artistic merit, the art generated by AI artists has captivated people because of the tool used to produce it, and questions about who gets credit for this art are likely to continue.

–Amber Banks


Enacted in 1968, the Fair Housing Act (“FHA”) has been around for more than half a century. The landmark civil rights act prohibits discrimination on the basis of race, color, religion, sex, familial status, or national origin in the housing market. Furthermore, the FHA contains a broad prohibition on the publication of discriminatory housing advertisements. Importantly, the FHA’s goal of eliminating discrimination in the housing market is far from being achieved, and that shortfall can be attributed in large part to the rise of the Internet and the discrimination that has followed it online.

Early last year, Facebook was sued in federal court by four nonprofit organizations dedicated to eliminating housing discrimination. The crux of the complaint is that Facebook violates the FHA by “continu[ing] to enable landlords and real estate brokers to bar families with children, women, and others from receiving rental and sales ads for housing.” The complaint follows in the wake of a 2016 ProPublica study finding that Facebook permits advertisers to exclude users assigned certain “ethnic affinities” from seeing housing advertisements. Specifically, ProPublica researchers found they were able to use Facebook’s advertising platform to curate ads that excluded anyone whose “affinity” was designated as African-American, Asian-American, or Hispanic.

The most recent development in this lawsuit against Facebook is the federal government’s involvement. The Department of Housing and Urban Development filed a statement of interest, alleging that Facebook allows advertisers to utilize its platform to engage in technologically enhanced redlining.

Although recognized primarily as a social media platform, Facebook occupies a robust space in the advertising world. In 2017 alone Facebook generated $40 billion in advertising revenue. According to Facebook, its greatest advertising asset is its user base of over 2 billion people. Imposing liability on Facebook itself, in addition to the users utilizing the platform to publish discriminatory advertisements, could have resounding effects on online communities.

The lawsuit, however, is greatly complicated by the Communications Decency Act (“CDA”). The CDA contains an immunity provision (Section 230) that essentially blocks lawsuits seeking to hold online service providers liable for content posted by their users. CDA immunity arose from congressional concerns that imposing liability in the online arena would stifle innovation, and recent caselaw indicates reluctance on the part of the federal courts to impose liability on online service providers.

Accordingly, the CDA and FHA are at odds with each other. Section 230 of the CDA limits liability online, while the FHA forbids discrimination in the housing market and broadly prohibits publication of discriminatory advertisements. In the event this lawsuit proceeds to trial, the court will be faced with a complicated task—navigating the murky waters between the FHA and the CDA. Only time will tell which Act will triumph.

–Alexandra Bakalar


USA Gymnastics has been in hot water for the past two years, starting with the revelation that longtime team doctor Larry Nassar had been abusing young female athletes for decades. In November 2017, Nassar pled guilty to seven counts of first-degree criminal sexual conduct. While Nassar has since been sentenced, the scandal revealed serious organizational issues within USA Gymnastics. Over 350 women came forward, shedding light on decades of abuse and raising the question: how could abuse of athletes competing at the highest level go on for so long?

In the past year, USA Gymnastics has attempted to repair some of the damage by cleaning house and hiring a new interim CEO, Mary Bono. However, Bono’s tenure was short-lived; she resigned three weeks later, after the discovery of a past racist tweet of hers. The United States Olympic Committee (USOC) has now taken steps to completely revoke USA Gymnastics’ status as the National Governing Body of the sport. In a statement released on November 5, the USOC explained the decision by saying, “we believe the challenges facing the organization are simply more than it is capable of overcoming in its current form.” An astute observation, USOC. The problem is, will this decision hurt the very athletes it is trying to do right by?

The USOC apparently made the decision with the support of gymnasts such as Olympian Simone Biles. Athletes deserve support from the USOC, and it’s good to see they are finally getting it. However, it’s important that the USOC go about this process in a way that will not interfere with the goals of gymnasts—after all, 2020 is not far away. Once the formal legal proceedings to strip USA Gymnastics of its status as National Governing Body are complete, the USOC says it has both short- and long-term plans for moving forward, with the ultimate goal being an organization that can “lead gymnastics in the United States and rebuild a supportive community of athletes and clubs that can carry the sport forward for decades to come.” Hopefully, the USOC—and gymnastics programs around the country—will learn from the mistakes they have made. We need more accountability and a more robust mechanism for protecting athletes, so that this kind of abuse never happens again. For now, this seems like a step in the right direction.

–Courtney Tibbetts


These days, it seems like the media is constantly publishing stories about the imminent arrival of self-driving vehicles and their exciting benefits for motor vehicle safety, urban planning, and productivity. A study by two UK professors published just this month predicts that driverless vehicles will even be used as cheap mobile substitutes for motels. Ever the negative Nancies, lawyers have instead focused on the questions of who to blame when a self-driving vehicle gets into an accident and how to regulate auto manufacturers. While these are important questions, lawyers should not lose sight of what is at stake when it comes to assigning liability and regulating industries.

Blaming Manufacturers

With respect to blame, commentators agree that as vehicles become increasingly automated, the liability for accidents will shift from drivers to manufacturers. In the early stages of automation, manufacturers will instruct drivers to maintain a lookout in case the automated system encounters difficult driving conditions (e.g., inclement weather, unpaved roads or other unusual terrain, unmapped and unfamiliar roads). So long as the technology anticipates human supervision, drivers will remain liable for accidents that result from inattentiveness or careless engagement of the self-driving feature.

Eventually, manufacturers expect to release vehicles that can drive without any human supervision. These vehicles are expected to transport passengers who are asleep, disabled, and underage; or drive with no passengers altogether. They may not even have control components like steering wheels and brake pedals.

When these vehicles encounter accidents (e.g., the vehicle hits a pedestrian while evading a sudden, falling tree), the injured will blame manufacturers. Under current products liability law, manufacturers will only be responsible for accidents that (1) they proximately cause as a result of their negligence or (2) their products proximately cause as a result of a defect. However, as the technology quickly evolves, accidents will become less attributable to negligence or defect.

Some commentators, such as the former director of the Federal Trade Commission’s Bureau of Consumer Protection, have proposed imposing strict liability for these accidents on manufacturers. They argue: because manufacturers are passing the costs of their products onto third parties (albeit reasonably), they are effectively receiving a public subsidy; therefore, in order to discourage them from overproducing dangerous vehicles, manufacturers should be forced to internalize these externalities.

Regulating Manufacturers

In October, the US Department of Transportation published version 3.0 of its automated vehicle guidance. Although the Department, under the Trump administration, aims to be hands-off and to encourage innovation, the public expects the federal government to assure equipment safety. In fact, Congress has considered a bill to grant statutory authority to the DOT and its subsidiary, the National Highway Traffic Safety Administration, to revamp regulations with driverless vehicles specifically in mind.

At the state level, twenty-nine states and Washington, D.C., have already enacted related legislation, and twelve more states have at least considered it. State law mostly governs the operation of vehicles—i.e., driver licensing—and local traffic laws (as opposed to design requirements, which federal law governs). For now, however, these state statutes do relatively little; they primarily grant express permission for tech companies to test their vehicles on public roads, although commentators believe companies had the right to do so without it.

However, even these permissive statutes require some things of manufacturers. For example, in Tennessee, Vanderbilt’s home state, the Automated Vehicle Act sets out four requirements for operating a self-driving vehicle without a human driver: registration of self-driving vehicles with the state, insurance, capacity to achieve a “minimal risk condition” in the event the system fails, and certification by the manufacturer that the vehicle can comply with traffic laws. It also states that, if the vehicle is driving itself, the self-driving system will be considered the driver of the vehicle in determining liability under products liability, tort law, or other statutory law. The effect of this law on liability remains to be seen.

What Is At Stake?

Balancing innovation with safety is a delicate act. Innovation shouldn’t be promoted for its own sake if it comes at others’ expense. But encouraging self-driving technology serves a deeper purpose: it saves lives. When considering tougher regulations, policymakers must acknowledge the grave opportunity costs: adopting tough regulations means forgoing the chance to swiftly adopt a technology that could reduce, or even prevent, the loss of 30,000+ lives in vehicular accidents every year.

–Jin Yoshikawa


Approximately a decade ago, a United States District Court in California heard O’Bannon v. NCAA. The case was brought by Ed O’Bannon—a former student-athlete at UCLA—on behalf of all collegiate athletes, and it challenged the National Collegiate Athletic Association’s (NCAA) restrictions on the benefits student-athletes were permitted to receive. O’Bannon’s argument was that the NCAA member universities were profiting on the backs of student-athletes, while the athletes were prohibited from profiting at all. Using antitrust law and policy-based arguments, the plaintiffs obtained a ruling that allowed NCAA member schools to provide student-athletes with the full cost of attendance at their respective universities: a payment that had previously been prohibited. However, the NCAA’s focus on “amateurism” still restrained the argument for unbounded pay-for-play and allowed the NCAA to continue limiting the money paid to student-athletes.

In sum, what resulted from the O’Bannon case was the ability of NCAA member universities to pay student-athletes a monthly stipend of up to $500 in addition to their scholarships. For many student-athletes, however, $500 is still not enough to cover all of their living expenses, and it is especially minuscule compared to the revenue the universities receive from those sports. As such, many argue that the O’Bannon case failed to fully address the problem. The fight for pay-for-play hasn’t ended; if anything, it has been fueled as the revenues that universities receive from collegiate sports are higher than ever before, with the overwhelming majority of that profitability deriving from men’s basketball and football.

Jenkins v. NCAA is effectively an O’Bannon rematch, this time with a slight change in strategy from the plaintiffs, owing in part to the benefit of the information that came out of the O’Bannon case. In front of the same district court judge, Judge Claudia Wilken, the argument has changed, but the plaintiffs are driven by the same social justice goal. The Jenkins plaintiffs argue that players’ athletic scholarships should be unbounded and instead be dictated by market demands, in the same way that the salaries of coaches and other athletic staff are determined. The plaintiffs in Jenkins “are pushing for deregulation that would allow regional conferences to set rules on the expenses that colleges can cover.”

To oppose the plaintiffs’ assertions, the NCAA will need to persuade the court that the current NCAA restrictions imposed on scholarship amounts “promote competition more than they harm it in the market for student-athletes’ athletic services.” Repercussions of this change include providing players with the autonomy and bargaining power they feel they currently lack, adding another level of competition between schools and conferences, and opening the door to potential cases of collusion, among other possible legal violations. But for the plaintiffs, this is a welcome change, and the risks are justified by the rewards.

As the trial is still ongoing, the outcome could go in many directions and will vary greatly based on whose arguments are most persuasive and best reasoned. Moreover, it can be expected that the losing party will appeal Judge Wilken’s decision. Still, the prospect of effective free agency is interesting to consider. Should Judge Wilken’s final judgment include free agency and deregulation by the NCAA, the money that member universities would need to pay players could come from the surplus they currently possess, leaving insufficient funds to support otherwise unprofitable sports. As such, many collegiate sports could be eliminated entirely, particularly those known as Olympic sports. No matter the final judgment in Jenkins, which is many months away, it is interesting to consider what the United States’ collegiate athletic landscape could look like in the event of a total upheaval of the current system.

–Bailey Vincent


Photos of children and grandchildren used to be reserved exclusively for mantels and wallets. Family and friends would go years without seeing those distant from them, and the phrase “you were THIS big when I last saw you,” followed by a squeeze of the cheeks, was a rite of passage at each family reunion. Gone are those days. Facebook and other social media sites have allowed estranged family and friends to keep up to date with those they don’t see on a daily basis. However, with this benefit of interconnectivity has come an embarrassing and perhaps dangerous violation of the rights of children.

Parents commonly post information, pictures, and videos of their child on social media in order to share with relatives, brag about their child, receive advice from other parents, or simply feel that they are not alone in the struggles of parenting. This phenomenon, known as “sharenting,” has surpassed simply showing a person a photo from their wallet. Sharenting exposes the personal lives of children to countless others across social media platforms. Some parents document their child’s battle with disease. Some simply document the daily life of their child. Some parents even stand to make a lot of money from posting their children on sites such as Facebook and Instagram, as corporations will pay for product placement and endorsements. Unfortunately, parents can be incentivized to publicly compromise their child’s dignity on social media. More extreme and embarrassing content attracts more followers, which means a higher payday for the owner of the account (i.e. the parent).

In any case, parents are able to shape their child’s online identity before the child is capable of assenting to being posted on the internet, let alone understanding what the internet is. Disclosures made online often go beyond the parents’ intent. Sharenting can expose children to an array of issues, such as embarrassment, bullying, identity theft, and pedophilia, and those harms can follow children into adulthood.

As these problems have manifested, a tension has emerged between a child’s right to privacy and a parent’s right to choose how to raise his or her child. Free speech also has a role to play. While many parents maintain that they have the right to narrate their child’s online story, others believe that parents have not only a responsibility to protect children from the negative effects of online exposure, but also a responsibility to ensure the child’s autonomy and broad right to privacy. Once capable of understanding online exposure, children seem to echo the latter sentiment as well. A 2016 study of 249 pairs of parents and children showed that children were “twice as likely to report that adults should not ‘overshare’ by posting information about children online without permission.”

The Children’s Online Privacy Protection Rule helps protect children from exposing their own privacy online, but allows for such exposure with parental consent. In the United States, parents are the gatekeepers of their children’s privacy and online content is seemingly regulated only by the parent’s sense of appropriateness. Is this enough?

In France and other countries, stricter laws allow grown-up kids to sue their parents for breaching their right to privacy as children. The United States could adopt such measures, but do we really expect or even want people to sue their parents? Even if the answer is “yes,” should we really allow compromising content to remain online until the child is old enough—or angry enough—to sue? Perhaps the answer lies in government oversight. However, such regulation may be too difficult to oversee and is likely to encounter First Amendment issues. Requiring the social media platforms themselves to police these matters would likely run afoul of these same issues.

For now, it seems as though parental education is the most realistic option to protect children from sharenting. Many parents do not contemplate the dangerous consequences when posting a picture of their baby on Facebook, or even that their baby has an independent right to privacy. Some have published advice for parents to follow, like not sharing publicly or not sharing a child’s location, but the effectiveness of such publications is not yet known. Whatever the solution may be, it begins with an understanding of a parental duty to protect children from the dangers of social media and to encourage the autonomy of every person, even children. As Sigmund Freud put it, “I cannot think of any need in childhood as strong as the need for a father’s protection.”

–Jackson Smith


On Tuesday, November 6, the polls were overwhelmed with an influx of voters, young and old, ready to make a difference and influence their local communities by voting. But as the 2018 midterm elections drew nearer, social media companies like Facebook, Twitter, and Snapchat were overwhelmed with another issue—how their platforms were influencing voters before election day.

The influence social media exerts over the political arena became apparent after the 2016 presidential election. Encouraging peers to vote by sharing voting information on social networks can increase voter turnout, help candidates raise campaign funds, and raise awareness about important social and political issues. However, there is a dark side to the way social media can influence politics that many of us have experienced firsthand. Fake news, misinformation, and targeted manipulation were huge issues that many social media sites were unprepared for in 2016. Many large social media platforms took action after the 2016 election, increasing their security measures, requiring more transparency for political advertisements, and improving their ability to detect fake user accounts. Despite the increased awareness, though, many of the same problems were present online in the months leading up to the 2018 midterms. Fake news circulated about candidates on multiple social media sites, and misinformation spread about the different parties’ platforms in spite of companies’ best efforts to review and assess potential inauthentic material.

Particularly concerning in the lead-up to the 2018 midterm elections was the influx of false advertisements, posts, and tweets encouraging voter suppression in one form or another. “I hear ICE agents will be at polling stations on election day,” stated one tweet that was later taken down, an obvious attempt to scare immigrants out of voting. Other posts meant to confuse voters about how, when, and where to vote plagued social media companies in the months leading up to the midterms. In North Dakota, an ad discouraging hunters from voting was condemned by the Republican party; at the same time, another ad targeted at young male Democrats suggested that they stay home on election day to give their female counterparts’ votes more weight. While Twitter, Facebook, and many other social media sites have been more active than ever in trying to tackle misinformation by shutting down fake accounts and taking down false advertisements, the hackers and radical voter groups responsible for these types of messages keep getting smarter.

This abuse of social media platforms has led many people to question whether a legal approach would help stop the spread of political misinformation. With the companies themselves equipped with technology and personnel ready to fight election manipulation, would imposing legislation that either restricts the content of these sites or increases their liability be worth the public backlash that would most certainly occur? Companies such as Facebook have already imposed content bans prohibiting posts that deter citizens from voting or spread misinformation, while Twitter has worked tirelessly to deactivate fake accounts in an attempt to thwart hackers from scaring voters away from the polls. While it is not a First Amendment violation for a company such as Twitter or Facebook to police online users’ speech, because they are private organizations, it would be a very different story if the government involved itself by imposing legislative bans on content.

One protection that online users have when it comes to political discourse comes in the form of Section 230 of the Communications Decency Act of 1996, a piece of legislation that protects social media companies from liability for the content posted by users on their sites. It exempts online intermediaries from a host of laws under which they could otherwise be held responsible for what users say and do. This protection is essential for sites that try to encourage potentially controversial discussions, such as Facebook and YouTube; without it, sites would be highly incentivized to censor or ban user content for fear of liability.

Monitoring social media for misinformation while balancing users’ freedom of speech and expression is no easy task. The question of how best to contain the spread of fake news and misinformation is one that we will surely keep asking ourselves as we prepare for the long road that lies ahead leading up to the 2020 election.

–Sarah Rodrigue


Purchasable randomized rewards, commonly known as “loot boxes,” are a contentious topic in the video game industry. Although many argue that loot boxes resemble a form of unregulated gambling targeted at minors, the number of games containing loot boxes has continued to grow. Over the last two years alone, more than twenty major video game titles have contained loot boxes. Popular mobile games also frequently feature this rewards mechanism. The growing prevalence of loot boxes has many asking: should loot boxes be regulated?

A loot box is a consumable random rewards mechanism, often represented by a box, a treasure chest, or a spinning wheel. The easiest and most common way for players to obtain a loot box is through an in-game purchase with real-world currency. Once a player has obtained a loot box, the player can open, or consume, the box and receive a random reward. Standard rewards include extra lives, power-ups, playable characters, character costumes, weapons, weapon skins, game modes, color schemes, and more. Although some rewards are rare or highly useful, many rewards provide little utility to players. Furthermore, the player’s odds of winning rare or powerful items, which can be extremely low, are often kept secret by the game developers.
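To make the mechanics concrete, the short Python sketch below simulates opening loot boxes as a weighted random draw over reward tiers whose probabilities the player never sees. The tiers, drop rates, and the fifty-box example are invented for illustration and do not describe any particular game.

# Hypothetical loot box: a hidden, weighted random draw over reward tiers.
# All tiers and probabilities below are made up for illustration.
import random

REWARD_TABLE = [                 # (reward tier, hidden drop probability)
    ("common skin",    0.799),
    ("rare weapon",    0.15),
    ("epic character", 0.05),
    ("legendary item", 0.001),
]

def open_loot_box():
    """Consume one loot box and return a randomly drawn reward tier."""
    tiers, weights = zip(*REWARD_TABLE)
    return random.choices(tiers, weights=weights, k=1)[0]

# A player who buys and opens fifty boxes might see a spread like this,
# quite possibly without ever receiving the rarest item.
pulls = [open_loot_box() for _ in range(50)]
print({tier: pulls.count(tier) for tier, _ in REWARD_TABLE})

Because the draw is independent each time, a player can keep paying without ever hitting a rare tier, which is the dynamic critics compare to a slot machine.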

The parallels between loot boxes and online gambling have led some consumers, parents, and policy makers to call for regulation. Both loot boxes and gambling require participants to pay money to receive a reward of variable value based on chance. Moreover, loot boxes and gambling activities share many of the same addictive design features, such as a rewards reinforcement schedule and an illusion of player control. The addictive qualities of loot boxes are especially concerning given that children have lower impulse control than adults, making them more vulnerable to gambling mechanisms and the behaviors developed through these mechanisms.

Nevertheless, game developers and other opponents of regulation contend that loot boxes are not gambling and thus should not be regulated. Those opposed to regulation note that real money cannot be won through loot boxes, that loot boxes are optional (and therefore not a core part of video games), and that some loot boxes can be obtained for free through normal game play. They also draw parallels between loot boxes and analog random rewards mechanisms, such as collectible card packs and random toy dispensers, which are not regulated.

Numerous countries have recently acted to address the loot box problem. For example, in April, Belgium and the Netherlands banned loot boxes as violations of their gambling laws. Additionally, China passed a law last year requiring developers to publish the reward probabilities for all loot boxes. More than a dozen European countries are also actively considering regulatory action on loot boxes.

No U.S. jurisdiction currently regulates loot boxes. Although Congress has not considered a national-level response, state politicians in California, Hawaii, Indiana, Washington, and Minnesota are considering bills to address loot boxes, though these state efforts seem to have mostly stalled for now. Nevertheless, at least some games are subject to transparency requirements in the U.S. For example, since 2017, Apple Inc. has required all games offered through its iOS App Store to disclose the reward probabilities of loot boxes to players prior to purchase.

With so many jurisdictions considering regulation, the next year will be telling for the future of loot boxes. Even absent regulation, players may have other avenues, such as consumer protection laws, through which to seek legal recourse in situations involving especially egregious loot box implementations.

—Alex Prati
