The Conversation – KnowTechie
https://knowtechie.com | Tech News, Reviews, and How-To's for the Non-Techie

The Macintosh is 40, but who’s counting when you’re this iconic
Thu, 25 Jan 2024 | https://knowtechie.com/the-macintosh-is-40-but-whos-counting-when-youre-this-iconic/

As the Apple Macintosh turns 40, its emphasis on “user experience” in 1984 has proven to be a key factor in the success of its blockbuster products.

Technology innovation requires solving hard technical problems, right?

Well, yes. And no. As the Apple Macintosh turns 40, what began as Apple prioritizing the squishy concept of “user experience” in its 1984 flagship product is, today, clearly vindicated by its blockbuster products since.

It turns out that designing for usability, efficiency, accessibility, elegance and delight pays off.

Apple’s market capitalization is now over US$2.8 trillion, and its brand is every bit as associated with the term “design” as the best New York or Milan fashion houses are.

Apple turned technology into fashion, and it did it through user experience.

It began with the Macintosh.

When Apple announced the Macintosh personal computer with a Super Bowl XVIII television ad on Jan. 22, 1984, it more resembled a movie premiere than a technology release.

The commercial was, in fact, directed by filmmaker Ridley Scott. That’s because founder Steve Jobs knew he was not selling just computing power, storage or a desktop publishing solution.

Rather, Jobs was selling a product for human beings to use, one to be taken into their homes and integrated into their lives.

Apple’s 1984 Super Bowl commercial is as iconic as the product it introduced.

This was not about computing anymore. IBM, Commodore and Tandy did computers.

As a human-computer interaction scholar, I believe that the first Macintosh was about humans feeling comfortable with a new extension of themselves, not as computer hobbyists but as everyday people.

All that “computer stuff” – circuits and wires and separate motherboards and monitors – was neatly packaged and hidden away within one sleek integrated box.

You weren’t supposed to dig into that box, and you didn’t need to dig into that box – not with the Macintosh.

The everyday user wouldn’t think about the contents of that box any more than they thought about the stitching in their clothes. Instead, they would focus on how that box made them feel.

Beyond the mouse and desktop metaphor

Mac classic computer
Image: Pexels

As computers go, was the Macintosh innovative? Sure. But not for any particular computing breakthrough.

The Macintosh was not the first computer to have a graphical user interface or employ the desktop metaphor: icons, files, folders, windows and so on.

The Macintosh was not the first personal computer meant for home, office or educational use. It was not the first computer to use a mouse.

It was not even the first computer from Apple to be or have any of these things. The Apple Lisa, released a year before, had them all.

It was not any one technical thing that the Macintosh did first.

But the Macintosh brought together numerous advances that were about giving people an accessory – not for geeks or techno-hobbyists, but for home office moms and soccer dads and eighth grade students who used it to write documents, edit spreadsheets, make drawings and play games.

The Macintosh revolutionized the personal computing industry and everything that was to follow because of its emphasis on providing a satisfying, simplified user experience.

Mac classic computer
Image: Unsplash

Where computers typically had complex input sequences in the form of typed commands (Unix, MS-DOS) or multi-button mice (Xerox Star, Commodore 64), the Macintosh used a desktop metaphor in which the computer screen presented a representation of a physical desk surface.

Users could click directly on files and folders on the desktop to open them. It also had a one-button mouse that allowed users to click, double-click and drag-and-drop icons without typing commands.

The Xerox Alto had first exhibited the concept of icons, invented in David Canfield Smith’s 1975 Ph.D. dissertation. The 1981 Xerox Star and 1983 Apple Lisa had used desktop metaphors.

But these systems had been slow to operate and still cumbersome in many aspects of their interaction design.

The Macintosh simplified the interaction techniques required to operate a computer and improved functioning to reasonable speeds.

Complex keyboard commands and dedicated keys were replaced with point-and-click operations, pull-down menus, draggable windows and icons, and systemwide undo, cut, copy and paste.

Unlike the Lisa, the Macintosh could run only one program at a time, but this simplified the user experience.

Apple co-founder Steve Jobs introduced the Macintosh on Jan. 24, 1984.

The Macintosh also provided a user interface toolbox for application developers, enabling applications to have a standard look and feel by using common interface widgets such as buttons, menus, fonts, dialog boxes and windows.

With the Macintosh, the learning curve for users was flattened, allowing people to feel proficient in short order. Computing, like clothing, was now for everyone.

A good experience

Mac classic computer
Image: Unsplash

Although I hesitate to use the cliches “natural” or “intuitive” when it comes to fabricated worlds on a screen – nobody is born knowing what a desktop window, pull-down menu or double-click is – the Macintosh was the first personal computer to make user experience the driver of technical achievement.

It indeed was simple to operate, especially compared with command-line computers at the time.

Whereas prior systems prioritized technical capability, the Macintosh was intended for nonspecialist users – at work, school or in the home – to experience a kind of out-of-the-box usability that today is the hallmark of not only most Apple products but an entire industry’s worth of consumer electronics, smart devices and computers of every kind.

According to Market Growth Reports, companies devoted to providing user experience tools and services were worth $548.91 million in 2023 and are expected to reach $1.36 billion by 2029.

User experience companies provide software and services to support usability testing, user research, voice-of-the-customer initiatives and user interface design, among many other user experience activities.

Rarely today do consumer products succeed in the market based on functionality alone. Consumers expect a good user experience and will pay a premium for it.

The Macintosh started that obsession and demonstrated its centrality.

It is ironic that the Macintosh technology being commemorated in January 2024 was never really about technology at all. It was always about people.

This is inspiration for those looking to make the next technology breakthrough, and a warning to those who would dismiss the user experience as only of secondary concern in technological innovation.

Editor’s Note: This article was written by Jacob O. Wobbrock, Professor of Information, University of Washington, and republished from The Conversation under a Creative Commons license. Read the original article.

What American children can learn from social robots
Sat, 20 Jan 2024 | https://knowtechie.com/what-american-children-can-learn-from-social-robots/

While the social robots currently used in schools are finicky and limited in functions, they can still provide useful learning experiences.

How would you feel if your child were being tutored by a robot? Social robots – robots that can talk and mimic and respond to human emotion – have been introduced into classrooms around the world.

Researchers have used them to read stories to preschool students in Singapore, help 12-year-olds in Iran learn English, improve handwriting among young children in Switzerland, and teach students with autism in England appropriate physical distance during social interactions.

Some experts believe these robots could become “as common as paper, whiteboards and computer tablets” in schools.

Because social robots have a body, humans react to them differently than we do to a computer screen. Studies have shown that little children sometimes accept social robots as peers.

For example, in the handwriting study, a 5-year-old boy continued to send letters to the robot months after the interactions ended.

As a professor of education, I study the different ways that teachers around the world do their jobs.

To understand how social robots could affect teaching, graduate student Raisa Gray and I introduced a 4-foot-tall humanoid robot called “Pepper” into a public elementary and middle school in the U.S.

Our research revealed many problems with the current generation of social robots, making it unlikely that social robots will be running classrooms anytime soon.

Not ready for prime time

Much of the research on social robots in schools is done in very restricted ways.

Children and social robots are not allowed to freely interact with each other without the assistance, or intervention, of researchers. Only a few studies have used social robots in real-life classroom settings.

Also, robotic researchers often use “Wizard of Oz” techniques in classroom settings. That means that a person is operating the robot remotely, giving the impression that the robot can really talk to humans.

Limited social skills

Robots need quiet. Any kind of background noise – class-change bells, loudspeaker announcements or other conversations – can disrupt the robot’s ability to follow a conversation.

This is one of the major problems facing the integration of robots into schools.

It is extremely difficult for programmers to create software and hardware systems that can achieve what humans do unconsciously.

For example, the current generation of social robots cannot interact with a small group while tracking multiple people’s facial expressions.

If a person is talking to two other people about their favorite football team and one of the listeners frowns or rolls their eyes, a human will likely pick up on that.

A robot will not. Also, unless a bar code or other identification device is used, today’s social robots cannot recognize individuals. This makes it very unlikely for them to have realistic social interactions.

Facial recognition software is difficult to use in a room full of moving, shifting people, and also raises serious ethical questions about keeping students’ personal information safe.

Dialogue is preprogrammed

Students talked to the ‘Pepper’ robot as if it were a person. Image: Julian Stratenschulte/picture alliance via Getty Images

To get the robot to perform, our students had to master the tutorials that came with the robot. Some students quickly figured out that the robot could respond only to certain basic routines.

For example, Pepper could respond to “How old are you?” but not “What age are you?” Other students kept trying to interact with the robot as if it were a person and got very frustrated with its nonhuman responses.

When a robot fails to answer a question, or responds in the wrong way, students realize the robot isn’t really understanding them and that the robot’s dialogue is preprogrammed. The robot can’t really make sense of the social context.
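The brittleness the students discovered – exact trigger phrases rather than understanding – can be sketched as a simple lookup table. The phrases and responses below are purely illustrative, not Pepper’s actual software:

```python
# A toy trigger-phrase dialogue table. Matching is literal, so the
# paraphrase "What age are you?" misses the table entirely.
# (Illustrative phrases only; not Pepper's actual software.)
RESPONSES = {
    "how old are you?": "I am three years old.",
    "what is your name?": "My name is Pepper.",
}

def respond(utterance: str) -> str:
    # Normalize case and whitespace, then look for an exact match.
    return RESPONSES.get(utterance.strip().lower(), "I don't understand.")

print(respond("How old are you?"))   # I am three years old.
print(respond("What age are you?"))  # I don't understand.
```

Any phrasing outside the table falls through to the canned fallback, which is exactly the nonhuman behavior that frustrated the students.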

In our study, students learned to adapt to the robot. One group of girls would stand around the robot while one kept petting its head.

This caused the robot to do either its “I feel like a cat” or its “I’m ticklish today” routine. This seemed to delight the girls. They appeared content to have one person interact with the robot while others watched.

Cannot move around classroom with ease

Students who have seen YouTube videos of robotic dogs that run and jump may be disappointed to realize that most social robots can’t move around a classroom with ease.

The teachers in our study were disappointed that Pepper couldn’t bring them coffee.

These problems aren’t limited to school settings. Service robots in some healthcare facilities have been programmed to deliver medicine, but this requires special sensors and programming.

And while stores and restaurants are experimenting with delivery and cleaning robots, when a grocery store in Scotland tried to use Pepper for customer interactions, the robot was fired after a week.

What social robots can teach kids

Social robots teaching children
Image: Pexels

While the social robots currently used in schools are finicky and limited in functions, they can still provide useful learning experiences.

Students can use them to learn more about robotics, artificial intelligence and the complexity of real human behavior.

As one researcher wrote, “Robots act as a bridge in enabling students to understand humans.”

Struggling with a robot’s limitations gives students real insights into the complicated nature of human social interaction.

The opportunity to work hands-on with a social robot shows students how difficult it is to program robots to mimic human behavior.

Social robots can also provide students with important learning opportunities about artificial intelligence. In Japan, Pepper is being used to introduce students to generative AI.

Students can link ChatGPT with Pepper’s physical presence to see how much AI improves Pepper’s communication and whether that makes it more lifelike.

As AI becomes a bigger part of our work and lives, educators need to prepare students to think critically about what it means to live and work with social machines.

And with a real human teacher’s guidance and oversight, students can explore why we want to talk to robots as if they were people.


Editor’s Note: This article was written by Gerald K. LeTendre, Professor of Educational Administration, Penn State, and republished from The Conversation under a Creative Commons license. Read the original article.

Groundbreaking technique reveals fingerprints in stunning 3D detail
Tue, 16 Jan 2024 | https://knowtechie.com/groundbreaking-technique-reveals-fingerprints-in-stunning-3d-detail/

The use of fingerprints as unique identifiers has a long history, going back to ancient Babylonian and Chinese civilizations.

When you use your fingerprint to unlock your smartphone, your phone is looking at a two-dimensional pattern to determine whether it’s the correct fingerprint before it unlocks for you. But the imprint your finger leaves on the surface of the button is actually a 3D structure called a fingermark.

Fingermarks are made up of tiny ridges of oil from your skin. Each ridge is only a few microns tall – a few hundredths of the thickness of a human hair.

Biometric identifiers record fingermarks only as 2D pictures, and although these carry a lot of information, there’s a lot missing. A 2D fingerprint neglects the depth of the fingermark, including pores and scars buried in the ridges of fingers that are difficult to see.

I’m an educator and scientist who studies holography, a field of research that focuses on how to display 3D information. My lab has created a way to map and visualize fingermarks in three dimensions from any perspective on a computer – using digital holography.

Fingermark types

Scientists categorize fingermarks as either patent, plastic or latent, depending on how visible they are when left on a surface.

Patent fingermarks are the most visible type – bloody fingerprints at crime scenes are one example. Plastic fingermarks are found on soft surfaces, such as clay, Play-Doh or chocolate bars. The human eye can see both patent and plastic fingermarks quite easily.

The least visible are latent fingermarks. These are usually found on hard surfaces such as glass, metal, wood and plastics. To make them out, a fingerprint examiner has to use physical or chemical methods such as dusting with powder, creating chemical reactions with appropriate reagents or cyanoacrylate fuming.

In its liquid form, cyanoacrylate is super glue, but as a gas it can make latent fingermarks visible. Researchers develop the prints by letting cyanoacrylate vapor molecules react with components in the latent fingerprint residue.

The geometric details on fingermarks are categorized into three levels. Level 1 encompasses visible ridge patterns such as loops, whorls and arches. Level 2 refers to minutiae, or small details, such as bifurcations, endings, eyes and hooks.

Fingerprints have visible ridge structures, such as arches (left), whorls (middle) and loops (right), but at the microscopic level they have much finer patterns and structures. Image: ValeriyPolunovskiy/Wikimedia Commons, CC BY-SA

Finally, Level 3 features, such as pores, scars and creases, are too small for the human eye to resolve. This is where optical techniques like holography come in handy, since optical wavelengths are in the order of microns, small enough to make out small details on an object.

Developing fingermark holograms

Since fingermarks are usually collected as 2D pictures, and holograms display 3D information, my team wanted to develop a technique that can show all the 3D topological characteristics of a fingermark.

To do this, we’ve been collaborating with Akhlesh Lakhtakia’s group at Penn State. They developed a specialized technique that deposits a nanoscale columnar thin film layer, called a CTF, on top of the fingermark to develop and preserve it.

Columnar thin films are dense pillars of glassy material that uniformly cover the fingermark, like a dense growth of identical trees in a forest.

Just as the tops of these trees would reflect the topology of the ground, the tops of these columnar thin films replicate the 3D structure of the fingermarks on which they are deposited.

Samples collected using CTF film. Image: Banerjee Lab

To make a hologram of something like a 3D fingermark, researchers split light from a laser into two parts. One part, called the reference wave, shines directly on a digital camera. The other wave shines on the object, in this case the fingermark.

If the object is reflective, the reflected light is also directed to the digital camera and superimposed on the reference wave.

The superposition of waves – both from the reference and the object – creates an interference pattern, which is called a hologram. In digital holography, this hologram, which is a 2D picture, is recorded in the digital camera.

Researchers then import the hologram to a computer, where they can use the physical laws of wave propagation to figure out where the light waves from the laser bounced off different parts of the object.

This process allows them to reconstruct the object as a 3D picture.

So, the reconstructed hologram has all the 3D details of the object, and you can now visualize the 3D object on a laptop from any perspective.
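The “physical laws of wave propagation” step can be sketched numerically. Below is a minimal angular spectrum propagator, a standard way to refocus a digitally recorded hologram in software; the wavelength and pixel-pitch values are illustrative, not the lab’s actual parameters:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z using the
    angular spectrum method: the wave-propagation law applied in
    software to refocus a digitally recorded hologram."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies (cycles/m)
    fxx, fyy = np.meshgrid(fx, fx)
    kz_sq = k**2 - (2 * np.pi * fxx)**2 - (2 * np.pi * fyy)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))    # drop evanescent components
    transfer = np.exp(1j * kz * z)          # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Sanity check: a uniform plane wave only picks up a phase as it
# propagates, so its intensity is unchanged.
holo = np.ones((256, 256), dtype=complex)
refocused = angular_spectrum_propagate(holo, wavelength=532e-9, dx=5e-6, z=1e-3)
print(np.allclose(np.abs(refocused), 1.0))  # True
```

In practice, the recorded hologram is propagated back to a range of distances and the reconstruction is sharpest at the plane where the fingermark actually sat.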

Picking up fingermarks

In 2017, our collaboration reported our first results, where we made 3D pictures of latent fingermarks using the CTF technique. We recorded holograms of the CTF-developed fingermarks with two different wavelengths of light – green and blue – generated from a laser.

Using two different wavelengths allowed us to make out tiny details such as pores in the 3D reconstructions.

Lakhtakia’s research group has deposited hundreds of fingermarks on glass, wood and plastic. They’ve then let them age in different environments, at various temperatures and humidity levels, before coating them with CTF film to pick up the fingerprint.

My group records the digital holograms of these fingermarks and visualizes them in 3D on a computer.

We have also started working on a better 3D fingermark analysis plan to help identify crime suspects.

The Miami Valley Regional Crime Lab in Dayton, Ohio, has graded the quality of the fingermarks captured by Lakhtakia’s research group.

It will also help us develop a new method for grading the 3D holographic reconstructions, something that does not currently exist.

This may involve creating categories to classify how clear the 3D renderings of the fingermarks are.

The use of fingerprints as unique identifiers has a long history, going back to ancient Babylonian and Chinese civilizations.

They’ve been used for forensic purposes since the late 1890s, starting in Calcutta, India. Our work aims to build on this rich history and use cutting-edge technologies to improve fingermark analysis.

Editor’s Note: This article was written by Partha Banerjee, Professor of Electrical and Computer Engineering, University of Dayton, and republished from The Conversation under a Creative Commons license. Read the original article.

AI challenges in 2024: Insights from 3 leading AI researchers
Sat, 13 Jan 2024 | https://knowtechie.com/ai-challenges-in-2024-insights-from-3-leading-ai-researchers/

The development of generative AI models is continuing at a dizzying pace. Here’s what to expect.

2023 was an inflection point in the evolution of artificial intelligence and its role in society.

The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days.

And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.

We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.


Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality.

And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices.

But taking control requires a better understanding of that technology.

One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education.

This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so – in ways that often do more harm than good.

However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans.

I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations – and therefore how it is useful and appropriate to use and how it’s not.

This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.

So, my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn.

In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.”

The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away.
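ELIZA’s pattern matching and substitution really is plain enough to fit in a few lines. A minimal sketch – the rules below are illustrative, not Weizenbaum’s originals:

```python
import re

# ELIZA-style rules: a regex pattern and a response template.
# "{0}" is filled with the text captured from the user's input.
# (Illustrative rules, not Weizenbaum's originals.)
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def eliza_reply(text: str) -> str:
    """Return the first matching rule's response, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(eliza_reply("I am worried about my exams"))
# How long have you been worried about my exams?
print(eliza_reply("hello"))  # Please go on.
```

Seeing the trick laid bare is what makes the magic crumble; no equally short listing exists for a model with hundreds of billions of parameters.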

I think it’s possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences.

And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.

Many of the challenges in the year ahead have to do with problems of AI that society is already facing.


Kentaro Toyama, Professor of Community Information, University of Michigan

In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.”

The singularity – the moment artificial intelligence matches and begins to exceed human intelligence – is not quite here yet, so it’s safe to say that Minsky was off by at least a factor of 10. It’s perilous to make predictions about AI.

Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024?

First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory, and global supremacy.

Expect more powerful AI, in addition to a flood of new AI applications.

The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of deep learning – what might be called generalized hard reasoning, things like deductive logic.

Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests?

Armies of AI scientists are working on this problem, so I expect some headway in 2024.

Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back.

Some of it will go haywire – comically, tragically or both.

Deepfakes – AI-generated images and videos that are difficult to detect – are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.

Speaking of problems, the very people sounding the loudest alarms about AI – like Elon Musk and Sam Altman – can’t seem to stop themselves from building ever more powerful AI.

I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them.

And along those lines, what I most hope for 2024 – though it seems slow in coming – is stronger AI regulation, at national and international levels.


Anjana Susarla, Professor of Information Systems, Michigan State University

In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace.

In contrast to ChatGPT a year back, which took in textual prompts as inputs and produced textual output, the new class of generative AI models are trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information.

With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.

Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone.

The emergence of these lightweight LLMs and open source LLMs could usher in a world of autonomous AI agents – a world that society is not necessarily prepared for.

These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine.

My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated content and AI-generated content, as well as pose new types of algorithmic harms.

The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation.

A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.

The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation.

While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy & Protection Act.

A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy.

With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but to consider the contexts the algorithms operate in: people, processes and society.

Editor’s Note: This article was written by Anjana Susarla, Professor of Information Systems, Michigan State University, Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder, Kentaro Toyama, Professor of Community Information, University of Michigan and republished from The Conversation under a Creative Commons license. Read the original article.

The post AI challenges in 2024: Insights from 3 leading AI researchers appeared first on KnowTechie.

]]>
https://knowtechie.com/ai-challenges-in-2024-insights-from-3-leading-ai-researchers/feed/ 0
Humans and ChatGPT mirror mutual language patterns – here’s how https://knowtechie.com/humans-and-chatgpt-mirror-mutual-language-patterns-heres-how/ https://knowtechie.com/humans-and-chatgpt-mirror-mutual-language-patterns-heres-how/#respond Thu, 15 Jun 2023 00:32:02 +0000 https://knowtechie.com/?p=301567 ChatGPT and similar language models serve as a mirror for human language, revealing both its unique creativity and repetitive nature.

The post Humans and ChatGPT mirror mutual language patterns – here’s how appeared first on KnowTechie.

]]>
ChatGPT is a hot topic at my university, where faculty members are deeply concerned about academic integrity, while administrators urge us to “embrace the benefits” of this “new frontier.” 

It’s a classic example of what my colleague Punya Mishra calls the “doom-hype cycle” around new technologies. Likewise, media coverage of human-AI interaction – whether paranoid or starry-eyed – tends to emphasize its newness.

In one sense, it is undeniably new. Interactions with ChatGPT can feel unprecedented, as when a tech journalist couldn’t get a chatbot to stop declaring its love for him.

In my view, however, the boundary between humans and machines, in terms of the way we interact with one another, is fuzzier than most people would care to admit, and this fuzziness accounts for a good deal of the discourse swirling around ChatGPT.

When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought – of course, I’m not a robot.

On the other hand, when my email client suggests a word or phrase to complete my sentence or when my phone guesses the next word I’m about to text, I start to doubt myself. Is that what I meant to say?

Would it have occurred to me if the application hadn’t suggested it? Am I part robot? These large language models have been trained on massive amounts of “natural” human language. Does this make the robots part human?

AI chatbots are new, but public debates over language change are not. As a linguistic anthropologist, I find human reactions to ChatGPT the most interesting thing about it.

Looking carefully at such reactions reveals the beliefs about language underlying people’s ambivalent, uneasy, still-evolving relationship with AI interlocutors.

ChatGPT and the like hold up a mirror to human language. Humans are both highly original and unoriginal when it comes to language. Chatbots reflect this, revealing tendencies and patterns that are already present in interactions with other humans.

Creators or mimics?

A user interacts with a ChatGPT chatbot interface.
Image: Getty Images

Recently, famed linguist Noam Chomsky and his colleagues argued that chatbots are “stuck in a prehuman or nonhuman phase of cognitive evolution” because they can only describe and predict, not explain.

Rather than drawing on an infinite capacity to generate new phrases, they compensate with huge amounts of input, which allows them to make predictions about which words to use with a high degree of accuracy.

This is in line with Chomsky’s historic recognition that human language could not be produced merely through children’s imitation of adult speakers.

The human language faculty had to be generative since children do not receive enough input to account for all the forms they produce, many of which they could not have heard before.

That is the only way to explain why humans – unlike other animals with sophisticated systems of communication – have a theoretically infinite capacity to generate new phrases.

Noam Chomsky developed the generative theory of language acquisition.

There’s a problem with that argument, though. Even though humans are endlessly capable of generating new strings of language, people usually don’t.

Humans are constantly recycling bits of language they’ve encountered before and shaping their speech in ways that respond – consciously or unconsciously – to the speech of others, present or absent.

As Mikhail Bakhtin – a Chomsky-like figure for linguistic anthropologists – put it, “our thought itself,” along with our language, “is born and shaped in the process of interaction and struggle with others’ thought.”

Our words “taste” of the contexts where we and others have encountered them before, so we’re constantly wrestling to make them our own.

Even plagiarism is less straightforward than it appears. The concept of stealing someone else’s words assumes that communication always takes place between people who independently come up with their own original ideas and phrases.

People may like to think of themselves that way, but nearly every interaction shows otherwise – when I parrot a saying of my dad’s to my daughter, when the president gives a speech that someone else crafted to express the views of an outside interest group, or when a therapist interacts with her client according to principles her teachers taught her to heed.

In any given interaction, the framework for production – speaking or writing – and reception – listening or reading and understanding – varies in terms of what is said, how it is said, who says it and who is responsible in each case.

What AI reveals about humans

Image: Georgia Tech Professional Education

The popular conception of human language views communication primarily as something that takes place between people who invent new phrases from scratch.

However, that assumption breaks down when Woebot, an AI therapy app, is trained to interact with human clients by human therapists, using conversations from human-to-human therapy sessions.

It breaks down when one of my favorite songwriters, Colin Meloy of The Decemberists, tells ChatGPT to write lyrics and chords in his own style.

Meloy found the resulting song “remarkably mediocre” and lacking in intuition, but also uncannily in the zone of a Decemberists song.

As Meloy notes, however, the chord progressions, themes, and rhymes in human-written pop songs also tend to mirror other pop songs, just as politicians’ speeches draw freely from past generations of politicians and activists, which were already replete with phrases from the Bible.

Pop songs and political speeches are especially vivid illustrations of a more general phenomenon. When anyone speaks or writes, how much is newly generated à la Chomsky?

How much is recycled à la Bakhtin? Are we part robot? Are the robots part human? People like Chomsky, who say that chatbots are unlike human speakers, are right.

However, so are those like Bakhtin who point out that we’re never really in control of our words – at least, not as much as we’d imagine ourselves to be.

In that sense, ChatGPT forces us to consider an age-old question anew: How much of our language is really ours?

Have any thoughts on this? Drop us a line below in the comments, or carry the discussion to our Twitter or Facebook.

Editors’ Recommendations:

Editor’s Note: This article was written by Brendan H. O’Conner, Associate Professor of Transborder Studies at Arizona State University, and republished from The Conversation under a Creative Commons license. Read the original article.

The post Humans and ChatGPT mirror mutual language patterns – here’s how appeared first on KnowTechie.

]]>
https://knowtechie.com/humans-and-chatgpt-mirror-mutual-language-patterns-heres-how/feed/ 0
ChatGPT AI traders: Too fast, too furious, too risky? https://knowtechie.com/chatgpt-ai-traders-too-fast-too-furious-too-risky/ https://knowtechie.com/chatgpt-ai-traders-too-fast-too-furious-too-risky/#respond Thu, 25 May 2023 01:28:40 +0000 https://knowtechie.com/?p=296216 ChatGPT is disrupting stock trading by revolutionizing the decision-making process and empowering traders with its advanced capabilities.

The post ChatGPT AI traders: Too fast, too furious, too risky? appeared first on KnowTechie.

]]>
Artificial Intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness, and speed of the work humans do.

And this is true in financial markets as much as in sectors like health care, manufacturing, and almost every other aspect of our lives.

I’ve been researching financial markets and algorithmic trading for 14 years. While AI offers many benefits, the growing use of these technologies in financial markets also points to potential perils.

A look at Wall Street’s past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making.

Program trading fuels Black Monday

In the early 1980s, fueled by advancements in technology and financial innovations such as derivatives, institutional investors began using computer programs to execute trades based on predefined rules and algorithms. This helped them complete large trades quickly and efficiently.

Back then, these algorithms were relatively simple and were primarily used for so-called index arbitrage, which involves trying to profit from discrepancies between the price of a stock index – like the S&P 500 – and that of the stocks it’s composed of.
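The index-arbitrage idea can be sketched in a few lines of Python. This is an illustrative toy, not any firm's actual system: the tickers, weights, prices, and the 0.5% threshold are all invented for the example.

```python
# Toy index-arbitrage check: compare an index's quoted price against the
# value implied by its constituent stocks. All numbers here are invented.

def implied_index_value(prices, weights):
    """Index value implied by its constituents' prices and weights."""
    return sum(prices[sym] * w for sym, w in weights.items())

def arbitrage_signal(index_quote, prices, weights, threshold=0.005):
    """Return 'buy_index', 'sell_index', or None.

    If the quoted index trades below the implied value by more than
    `threshold` (as a fraction), buy the index and sell the stocks;
    if it trades above, do the reverse.
    """
    fair = implied_index_value(prices, weights)
    gap = (index_quote - fair) / fair
    if gap < -threshold:
        return "buy_index"   # index is cheap relative to its constituents
    if gap > threshold:
        return "sell_index"  # index is rich relative to its constituents
    return None

weights = {"AAA": 0.5, "BBB": 0.3, "CCC": 0.2}
prices = {"AAA": 100.0, "BBB": 50.0, "CCC": 200.0}
# Implied value: 100*0.5 + 50*0.3 + 200*0.2 = 105.0
print(arbitrage_signal(103.0, prices, weights))  # buy_index
```

Real 1980s program trading applied the same comparison with futures contracts and execution costs layered on top, but the core signal was this discrepancy check.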

As technology advanced and more data became available, this kind of program trading became increasingly sophisticated, with algorithms able to analyze complex market data and execute trades based on a wide range of factors.

These program traders continued to grow in number on the largely unregulated trading freeways – on which over a trillion dollars worth of assets change hands every day – causing market volatility to increase dramatically.

Eventually, this resulted in the massive stock market crash in 1987, known as Black Monday. The Dow Jones Industrial Average suffered what was, at the time, the biggest percentage drop in its history, and the pain spread throughout the globe.

In response, regulatory authorities implemented a number of measures to restrict the use of program trading, including circuit breakers that halt trading when there are significant market swings and other limits.

But despite these measures, program trading continued to grow in popularity in the years following the crash.

Newspaper front pages from Oct. 20, 1987, reporting the Black Monday crash: the Dow Jones Industrial Average plunged 508 points, a drop of 22.6%, on nearly double the record trading volume.
Image: AP / KnowTechie

HFT: Program trading on steroids

Fast forward 15 years to 2002, when the New York Stock Exchange introduced a fully automated trading system. As a result, program traders gave way to more sophisticated automation with much more advanced technology: High-frequency trading.

HFT uses computer programs to analyze market data and execute trades at extremely high speeds.

Unlike program traders, who bought and sold baskets of securities over time to take advantage of an arbitrage opportunity – a difference in the price of similar securities that can be exploited for profit – high-frequency traders use powerful computers and high-speed networks to analyze market data and execute trades at lightning-fast speeds.

High-frequency traders can conduct trades in approximately one 64-millionth of a second, compared with the several seconds it took traders in the 1980s.

These trades are typically very short-term in nature and may involve buying and selling the same security multiple times in a matter of nanoseconds.

AI algorithms analyze large amounts of data in real-time and identify patterns and trends that are not immediately apparent to human traders. This helps traders make better decisions and execute trades at a faster pace than would be possible manually.

Another important application of AI in HFT is natural language processing, which involves analyzing and interpreting human language data such as news articles and social media posts.

By analyzing this data, traders can gain valuable insights into market sentiment and adjust their trading strategies accordingly.

Benefits of AI trading

Image: Pexels

These AI-based, high-frequency traders operate very differently than people do.

The human brain is slow, inaccurate, and forgetful. It is incapable of the quick, high-precision, floating-point arithmetic needed to analyze huge volumes of data and identify trade signals.

Computers are millions of times faster, with essentially infallible memory, perfect attention, and limitless capability for analyzing large volumes of data in split milliseconds.

And, just like most technologies, HFT provides several benefits to stock markets.

These traders typically buy and sell assets at prices very close to the market price, which means they don’t charge investors high fees. This helps ensure that there are always buyers and sellers in the market, which in turn helps to stabilize prices and reduce the potential for sudden price swings.

High-frequency trading can also help to reduce the impact of market inefficiencies by quickly identifying and exploiting mispricing in the market.

For example, HFT algorithms can detect when a particular stock is undervalued or overvalued and execute trades to exploit these discrepancies. By doing so, this kind of trading can help to correct market inefficiencies and ensure that assets are priced more accurately.

The downsides

But speed and efficiency can also cause harm. HFT algorithms can react so quickly to news events and other market signals that they can cause sudden spikes or drops in asset prices.

Additionally, HFT financial firms are able to use their speed and technology to gain an unfair advantage over other traders, further distorting market signals.

The volatility created by these extremely sophisticated AI-powered trading beasts led to the so-called flash crash in May 2010, when stocks plunged and then recovered in a matter of minutes – erasing and then restoring about $1 trillion in market value.

Since then, volatile markets have become the new normal. In 2016 research, two co-authors and I found that volatility – a measure of how rapidly and unpredictably prices move up and down – increased significantly after the introduction of HFT.

The speed and efficiency with which high-frequency traders analyze the data mean that even a small change in market conditions can trigger many trades, leading to sudden price swings and increased volatility.

In addition, research I published with several other colleagues in 2021 shows that most high-frequency traders use similar algorithms, which increases the risk of market failure.

That’s because as the number of these traders increases in the marketplace, the similarity in these algorithms can lead to similar trading decisions.

This means that all of the high-frequency traders might trade on the same side of the market if their algorithms release similar trading signals.

That is, they all might try to sell in case of negative news or buy in case of positive news. If there is no one to take the other side of the trade, markets can fail.
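This herding risk is easy to see in a toy simulation (my own illustration, not the cited research): give every trader the same decision rule and a single piece of negative news puts all of them on the sell side, while traders who add independent judgment split across both sides of the market.

```python
import random

# Toy simulation of algorithm herding. Traders sharing one algorithm all
# react identically to a news signal; diverse traders add independent
# judgment, modeled here as Gaussian noise.

def trade_decision(signal, noise=0.0):
    """Shared rule: sell on net-negative news, buy otherwise."""
    return "sell" if signal + noise < 0 else "buy"

def order_imbalance(n_traders, signal, shared_algorithm=True, seed=0):
    """Fraction of traders ending up on the sell side after a news signal."""
    rng = random.Random(seed)
    sells = 0
    for _ in range(n_traders):
        # Clones add no judgment of their own; diverse traders do.
        noise = 0.0 if shared_algorithm else rng.gauss(0, 1)
        if trade_decision(signal, noise) == "sell":
            sells += 1
    return sells / n_traders

print(order_imbalance(1000, signal=-0.2, shared_algorithm=True))
# 1.0 - every trader sells; no one takes the other side of the trade
print(order_imbalance(1000, signal=-0.2, shared_algorithm=False))
# well below 1.0 - independent judgment keeps both sides populated
```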

Enter ChatGPT

ChatGPT on phone in front of text
Image: Pexels

That brings us to a new world of ChatGPT-powered trading algorithms and similar programs. They could take the problem of too many traders on the same side of a deal and make it even worse.

In general, humans left to their own devices tend to make a diverse range of decisions. But if everyone derives their decisions from a similar artificial intelligence, this can limit the diversity of opinion.

Consider an extreme, nonfinancial situation in which everyone depends on ChatGPT to decide on the best computer to buy. Consumers are already very prone to herding behavior, in which they tend to buy the same products and models.

For example, reviews on Yelp, Amazon, and so on motivate consumers to pick among a few top choices.

Since decisions made by the generative AI-powered chatbot are based on past training data, there would be a similarity in the decisions suggested by the chatbot. ChatGPT would likely suggest the same brand and model to everyone.

This might take herding to a whole new level and could lead to shortages in certain products and services as well as severe price spikes. This becomes more problematic when the AI making the decisions is informed by biased and incorrect information.

AI algorithms can reinforce existing biases when systems are trained on biased, old, or limited data sets. And ChatGPT and similar tools have been criticized for making factual errors.

In addition, since market crashes are relatively rare, there isn’t much data on them. And because generative AIs learn from training data, their blind spot around crashes could make crashes more likely to happen.

For now, at least, it seems most banks won’t be allowing their employees to take advantage of ChatGPT and similar tools. Citigroup, Bank of America, Goldman Sachs, and several other lenders have already banned their use on trading-room floors, citing privacy concerns.

But I strongly believe banks will eventually embrace generative AI once they resolve concerns they have with it. The potential gains are too significant to pass up – and there’s a risk of being left behind by rivals.

But the risks to financial markets, the global economy, and everyone are also great, so I hope they tread carefully.

Have any thoughts on this? Drop us a line below in the comments, or carry the discussion over to our Twitter or Facebook.


Editor’s Note: This article was written by Pawan Jain, Assistant Professor of Finance at West Virginia University, and republished from The Conversation under a Creative Commons license. Read the original article.

The post ChatGPT AI traders: Too fast, too furious, too risky? appeared first on KnowTechie.

]]>
https://knowtechie.com/chatgpt-ai-traders-too-fast-too-furious-too-risky/feed/ 0
How to keep your teen safe from dangerous social media challenges https://knowtechie.com/how-to-keep-your-teen-safe-from-dangerous-social-media-challenges/ https://knowtechie.com/how-to-keep-your-teen-safe-from-dangerous-social-media-challenges/#respond Thu, 25 May 2023 00:22:39 +0000 https://knowtechie.com/?p=296221 In this article, we'll discuss effective ways to help teens resist social pressure and avoid the risks of social media trends.

The post How to keep your teen safe from dangerous social media challenges appeared first on KnowTechie.

]]>
Viral social media trends started innocently enough. In the early 2010s, there was planking, the “Harlem Shake” dance, and lip-syncing to Carly Rae Jepsen’s summer anthem “Call Me Maybe.”

Then came the ice bucket challenge, which raised an estimated US$115 million for ALS research.

In recent years, social media challenges have grown more popular – and more dangerous, leading to serious injuries and even deaths. It’s not hard to see why.

The milk crate challenge dares people to walk or run across a loosely stacked pyramid of milk crates, the Tide pod challenge involves eating laundry detergent pods, and the Benadryl challenge encourages taking six or more doses of over-the-counter allergy medication all at once.


As clinical psychology researchers, we study why social media challenges are so appealing to teens despite the dangers they pose, and steps parents can take to protect their kids.

The appeal of viral stunts

A person jumps in the air.
Image: Pexels

Almost all American teens today have access to a smartphone and actively use multiple social media platforms – with YouTube, TikTok, Instagram, and Snapchat being the most popular among this age group.

Meanwhile, the teenage years are linked to an increase in risk-taking. The human brain isn’t fully developed until a person reaches their mid-20s, and the parts of the brain that relate to reward and doing what feels good develop more quickly than areas linked to decision-making.

As a result, teens are more likely to act impulsively and risk physical injury to gain popularity. Teens are also particularly vulnerable to social pressure.

A 2016 study found that teens were more likely to “like” a photo – even when it showed drug or alcohol use – if the photo had more “likes” from peers.

The same study also showed that activity increased in the reward centers of teenage brains when viewing posts with more “likes.” Simply put, teens pay closer attention to social media content with a high number of “likes” and views.

In best-case scenarios, this vulnerability to social pressure may result in, say, buying a certain brand of sneakers. Yet in worst-case scenarios, this can lead teens to do dangerous stunts to impress or amuse their friends.

In our work, we found that celebrities, musicians, athletes, and influencers can also increase risky teen behaviors, such as alcohol and drug use, especially because they earn many “likes” and attract huge followings on social media.

Teens today may find it more difficult to resist social pressure. They not only have unlimited access to their peers and other influencers, but online social networks are also much larger, with teens following hundreds – sometimes thousands – of online users.

What parents can do

A person sits indoors using a laptop, their face illuminated by the screen.
Image: Unsplash

Below are five ways parents can help their teen resist social pressure and avoid risks linked to social media trends.

Listen to your teen

Parents can learn more about social media by asking their teen open-ended questions about their experiences, such as, “Has anything you’ve seen on Instagram upset you lately?”

Share your own concerns about social media while listening to your teen’s thoughts and perspectives. This kind of open communication can improve kids’ mental health and social skills.

Research also shows that watching media content with your teens – and discussing issues that come up during and after media use – helps with children’s brain development and critical thinking. It can also help to resolve questions or clear up misinformation.

Talk about what is rewarding

Teens don’t always know why they engage in certain behaviors or are curious about dangerous activities. 

Having a conversation with them about what feels good about “likes” and comments online could help them identify similar rewarding experiences offline – such as joining a school sports team or extracurricular club.

Research shows that sports participation is a helpful way to build one’s social identity, self-esteem and meaningful connections with others.

Talk about what is risky

A person in a shirt sits indoors, their elbow propped and hand resting against their face.
Image: Pexels

Social media posts often glamorize risky behaviors. For example, alcohol use posts focus on the fun aspects and avoid depictions of blackouts or injury. Similarly, teens see “likes” and views from social media challenges, but not hospitalizations and deaths.

Parents can talk to teens about this gap. Since teens are often more knowledgeable about the latest social media challenges, ask them about the topic and help them think through possible risks.

Get informed

One of the best ways to connect with teens is to learn about topics that interest them.

If they enjoy Instagram, consider creating your own account and ask them to show you the ropes on the platform, as teaching others can be rewarding for teens.

Also, take the time to explore on your own and keep up to date on social media features, challenges, and risky trends.

Make a plan

A family media plan can help you and your teen agree on screen-free times, media curfews, and ways to choose good media habits.

Social media can also help teens form friendships, stay connected with distant friends and family members, reduce stress, and access medical providers, help lines, or other tools that support physical and mental health.

Come up with a plan that all family members can follow to enjoy the benefits of social media. Your family can always revise the media plan as your child gets older.

Have any thoughts on this? Drop us a line below in the comments, or carry the discussion over to our Twitter or Facebook.


Editor’s Note: This article was written by Elisa M. Trucco, Associate Professor of Psychology at Florida International University, and Julie Cristello, Associate Candidate in Clinical Science at Florida International University, and republished from The Conversation under a Creative Commons license. Read the original article.

The post How to keep your teen safe from dangerous social media challenges appeared first on KnowTechie.

]]>
https://knowtechie.com/how-to-keep-your-teen-safe-from-dangerous-social-media-challenges/feed/ 0
AI-generated spam may soon flood your inbox with scams https://knowtechie.com/ai-generated-spam-may-soon-flood-your-inbox-with-scams/ https://knowtechie.com/ai-generated-spam-may-soon-flood-your-inbox-with-scams/#respond Sat, 13 May 2023 20:25:51 +0000 https://knowtechie.com/?p=290962 Improved AI spam filters could accurately detect and prevent unwanted spam while also allowing legitimate marketing emails to come through.

The post AI-generated spam may soon flood your inbox with scams appeared first on KnowTechie.

]]>
Each day, messages from Nigerian princes, peddlers of wonder drugs, and promoters of can’t-miss investments choke email inboxes.

Improvements to spam filters only seem to inspire new techniques to break through the protections.

Now, the arms race between spam blockers and spam senders is about to escalate with the emergence of a new weapon: generative artificial intelligence.

With recent advances in AI made famous by ChatGPT, spammers could have new tools to evade filters, grab people’s attention and convince them to click, buy or give up personal information.

As director of the Advancing Human and Machine Reasoning lab at the University of South Florida, I research the intersection of artificial intelligence, natural language processing, and human reasoning.

I have studied how AI can learn people’s individual preferences, beliefs, and personality quirks.

This can be used to understand better how to interact with people, help them learn, or provide helpful suggestions.

But this also means you should brace for smarter spam that knows your weak spots – and can use them against you.

Spam, spam, spam

new gmail spam
Image: Unsplash

So, what is spam? Spam is defined as unsolicited commercial emails sent by an unknown entity. The term is sometimes extended to text messages, direct messages on social media, and fake reviews on products.

Spammers want to nudge you toward action: buying something, clicking on phishing links, installing malware, or changing views.

Spam is profitable. One email blast can make $1,000 in only a few hours, costing spammers only a few dollars – excluding initial setup. An online pharmaceutical spam campaign might generate around $7,000 per day.

Legitimate advertisers also want to nudge you to action – buying their products, taking their surveys, and signing up for newsletters.

Still, whereas a marketer email may link to an established company website and contain an unsubscribe option in accordance with federal regulations, a spam email may not.

Spammers also lack access to mailing lists that users signed up for. Instead, spammers utilize counter-intuitive strategies such as the “Nigerian prince” scam.

A Nigerian prince claims to need your help to unlock an absurd amount of money, promising to reward you nicely.

Savvy digital natives immediately dismiss such pleas, but the absurdity of the request may actually select for naïveté or advanced age, filtering for those most likely to fall for the scams.

Advances in AI, however, mean spammers might not have to rely on such hit-or-miss approaches.

AI could allow them to target individuals and make their messages more persuasive based on easily accessible information, such as social media posts.

Future of spam

ChatGPT on laptop
Image: Pexels

Chances are you’ve heard about the advances in generative large language models like ChatGPT.

The task these generative LLMs perform is deceptively simple: given a text sequence, predict which token – think of this as a part of a word – comes next.

Then, predict which token comes after that. And so on, over and over.

Somehow, training on that task alone, when done with enough text on a large enough LLM, seems to be enough to imbue these models with the ability to perform surprisingly well on a lot of other tasks.
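That predict-append-repeat loop is simple enough to sketch with a toy bigram table standing in for a trained LLM. The table, its probabilities, and the "Nigerian prince"-flavored phrases are invented for illustration; a real model conditions on the entire preceding sequence over a vocabulary of tens of thousands of tokens.

```python
# Toy next-token predictor: a tiny bigram table stands in for an LLM.
# The mechanism - predict the next token, append it, repeat - is the same.

BIGRAMS = {
    "dear":   {"friend": 0.7, "sir": 0.3},
    "friend": {",": 1.0},
    "sir":    {",": 1.0},
    ",":      {"i": 1.0},
    "i":      {"need": 0.6, "have": 0.4},
    "need":   {"your": 1.0},
    "your":   {"help": 1.0},
}

def next_token(token):
    """Pick the most probable continuation of the last token."""
    options = BIGRAMS.get(token, {})
    return max(options, key=options.get) if options else None

def generate(prompt, max_tokens=6):
    """Repeatedly predict-and-append, starting from a prompt."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = next_token(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("dear"))  # dear friend , i need your help
```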

Multiple ways to use the technology have already emerged, showcasing the technology’s ability to adapt to, and learn about, individuals quickly.

For example, LLMs can write full emails in your writing style, given only a few examples of how you write. And there’s the classic example – now over a decade old – of Target figuring out a customer was pregnant before her father knew.

Spammers and marketers alike would benefit from being able to predict more about individuals with less data.

Given your LinkedIn page, a few posts, and a profile image or two, LLM-armed spammers might make reasonably accurate guesses about your political leanings, marital status, or life priorities.

Our research showed that LLMs could be used to predict which word an individual will say next with a degree of accuracy far surpassing other AI approaches in a word-generation task called the semantic fluency task.

We also showed that LLMs can take certain types of questions from tests of reasoning abilities and predict how people will respond to that question.

This suggests that LLMs already have some knowledge of what typical human reasoning ability looks like.

If spammers make it past initial filters and get you to read an email, click a link, or even engage in conversation, their ability to apply customized persuasion increases dramatically.

Here again, LLMs can change the game. Early results suggest that LLMs can be used to argue persuasively on topics ranging from politics to public health policy.

Good for the gander

man looking at phone in front of his computer
Image: Unsplash

AI, however, doesn’t favor one side or the other. Spam filters should also benefit from AI spam advances, allowing them to erect new barriers to unwanted emails.

Spammers often try to trick filters with special characters, misspelled words, or hidden text, relying on the human propensity to forgive small text anomalies – for example, “c1îck h.ere n0w.”

But as AI better understands spam messages, filters could better identify and block unwanted spam. And maybe even let through wanted spam, such as marketing email you’ve explicitly signed up for.
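One way a smarter filter could see through tricks like “c1îck h.ere n0w” is to normalize text before classifying it. Here is a minimal sketch; the character map is illustrative, not any real filter's actual rules.

```python
import unicodedata

# Sketch of a normalization pass a filter might run before classification:
# map lookalike digits and accented characters back to plain letters so
# "c1îck h.ere n0w" matches a blocklist entry like "click here now".

LEET = str.maketrans({"0": "o", "1": "l", "3": "e", "4": "a", "5": "s", "7": "t"})

def normalize(text):
    # Decompose accented characters ("î" -> "i" + combining mark),
    # then drop the combining marks.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    # Undo digit-for-letter swaps, then strip punctuation hidden in words.
    text = text.lower().translate(LEET)
    return "".join(c for c in text if c.isalpha() or c.isspace())

print(normalize("c1îck h.ere n0w"))  # click here now
```

A production filter would pair a pass like this with a learned classifier rather than a fixed blocklist, but the anomaly-forgiveness loophole the spammers rely on closes the same way.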

Imagine a filter that predicts whether you’d want to read an email before you even read it.

Despite growing concerns about AI – as evidenced by Tesla, SpaceX, and Twitter CEO Elon Musk, Apple founder Steve Wozniak and other tech leaders calling for a pause in AI development – a lot of good could come from advances in the technology.

AI can help us understand how weaknesses in human reasoning might be exploited by bad actors and develop ways to counter malevolent activities.

All new technologies can result in both wonder and danger. The difference lies in who creates and controls the tools and how they are used.

This article was updated to indicate that it was a teenager’s father.

Have any thoughts on this? Drop us a line below in the comments, or carry the discussion over to our Twitter or Facebook.

Editors’ Recommendations:

Editor’s Note: This article was written by John Licato, Assistant Professor of Computer Science and Director of AMHR Lab, University of South Florida and republished from The Conversation under a Creative Commons license. Read the original article.

The post AI-generated spam may soon flood your inbox with scams appeared first on KnowTechie.

]]>
https://knowtechie.com/ai-generated-spam-may-soon-flood-your-inbox-with-scams/feed/ 0
Online predators are stalking children’s webcams, study reveals https://knowtechie.com/online-predators-are-stalking-childrens-webcams-study-reveals/ https://knowtechie.com/online-predators-are-stalking-childrens-webcams-study-reveals/#respond Sat, 13 May 2023 20:14:27 +0000 https://knowtechie.com/?p=290954 Here are some recommendations to help keep your kid safe while online and how to protect them from predators.

The post Online predators are stalking children’s webcams, study reveals appeared first on KnowTechie.

]]>
There has been a tenfold increase in sexual abuse imagery created with webcams and other recording devices worldwide since 2019, according to the Internet Watch Foundation.

Social media sites and chatrooms are the most common venues predators use to make contact with kids, and abuse occurs both online and offline.

Increasingly, predators are using advances in technology to engage in technology-facilitated sexual abuse.

Once a predator has gained access to a child’s webcam, they can use it to record, produce and distribute child pornography.

We are criminologists who study cybercrime and cybersecurity. Our current research examines online predators’ methods to compromise children’s webcams.

To do this, we posed online as children to observe active online predators in action.

Chatbots

We began by creating several automated chatbots disguised as 13-year-old girls. We deployed these chatbots as bait for online predators in various chatrooms frequently used by children to socialize.

The bots never initiated conversations and were programmed to respond only to users who identified as over 18 years of age. We programmed the bots to begin each conversation by stating their age, sex, and location.

This is common practice in chatroom culture and ensured the conversations logged were with adults over the age of 18 who were knowingly and willingly chatting with a minor.

Though it’s possible some subjects were underage and posing as adults, previous research shows online predators usually represent themselves as younger than they actually are, not older.

Screenshot of a logged conversation: a predator, self-identified as 19, asks the chatbot – posing as a teenage girl alone in her room – what she is wearing and sends her a Whereby link, asking her to join.
Image: KnowTechie

Most prior studies of child sexual abuse rely on historical data from police reports, which provides an outdated depiction of the tactics currently used to abuse children.

In contrast, the automated chatbots we used gathered data about active offenders and their current methods of facilitating sexual abuse.

Methods of attack

In total, our chatbots logged 953 conversations with self-identified adults who were told they were talking with a 13-year-old girl.

Nearly all the conversations were sexual in nature, with an emphasis on webcams. Some predators were explicit in their desires and immediately offered payment for videos of the child performing sexual acts.

Others attempted to solicit videos with promises of love and future relationships. In addition to these commonly used tactics, we found that 39% of conversations included an unsolicited link.

We conducted a forensics investigation of the links. We found that 19% (71 links) were embedded with malware, 5% (18 links) led to phishing websites, and 41% (154 links) were associated with Whereby, a video conferencing platform operated by a company in Norway.

Editor’s Note: The Conversation reviewed the author’s unpublished data and confirmed that 41% of the links in the chatbot dialogues were to Whereby video meetings and that a sample of the dialogues with the Whereby links showed subjects attempting to entice what they were told were 13-year-old girls to engage in inappropriate behavior.

It was immediately obvious to us how some of these links could help a predator victimize a child. Online predators use malware to compromise a child’s computer system and gain remote access to their webcam.

Phishing sites are used to harvest personal information, aiding the predator in victimizing their target.

For example, phishing attacks can give a predator access to the password to a child’s computer, which could be used to access and remotely control the child’s camera.

Whereby video meetings

Screenshot of a Whereby video meeting room (“daily-standup”) with multiple participants and controls for mic, screen sharing, recording and chat.
Image: KnowTechie

At first, it was unclear why Whereby was favored among online predators or whether the platform was being used to facilitate online sexual abuse.

After further investigation, we found that online predators could exploit known functions in the Whereby platform to watch and record children without their active or informed consent.

This method of attack can simplify online sexual abuse. The offender does not need to be technically savvy or socially manipulative to gain access to a child’s webcam.

Instead, someone who can persuade a victim to visit a seemingly innocuous site could gain control of the child’s camera.

Having gained access to the camera, a predator can violate the child by watching and recording them without actual – as opposed to technical – consent.

This level of access and disregard for privacy facilitates online sexual abuse.

Based on our analysis, it is possible that predators could use Whereby to control a child’s webcam by embedding a livestream of the video on a website of their choosing.

We had a software developer run a test with an embedded Whereby account, which showed that the account host could embed code that allows them to turn on the visitor’s camera.

The test confirmed that turning on a visitor’s camera without their knowledge is possible.

A person using a laptop with Zoom open
Image: Pexels

We have found no evidence suggesting that other major videoconferencing platforms, such as Zoom, BlueJeans, WebEx, GoogleMeet, GoTo Meeting, and Microsoft Teams, can be exploited this way.

Control of the visitor’s camera and mic is limited to within the Whereby platform, and some icons indicate when the camera and mic are on.

However, children might not be aware of the camera and mic indicators and would be at risk if they switched browser tabs without exiting the Whereby platform or closing that tab.

In this scenario, a child would be unaware that the host was controlling their camera and mic.

Editor’s Note: The Conversation reached out to Whereby, and a spokesperson there disputed that the feature could be exploited. “Whereby and our users cannot access a user’s camera or microphone without receiving clear permission from the user to do so via their browser permissions,” wrote Victor Alexandru Truică, Information Security Lead for Whereby. He also said that users can see when the camera is on and can “close, revoke, or ‘turn off’ that permission at any time.”

A lawyer for the company also wrote that Whereby disputes the researchers’ claims. “Whereby takes the privacy and safety of its customers seriously. This commitment is core to how we do business, and it is central to our products and services.”

Revoking access to the webcam following initial permission requires knowledge of browser caches.

A recent study reported that although children are considered fluent new media users, they lack digital literacy in the area of safety and privacy.

Since caches are a more advanced safety and privacy feature, children should not be expected to know to clear browser caches or how to do so.

Keeping your kids safe online

Two people sit at a table looking at a computer.
Image: Pexels

Awareness is the first step toward a safe and trustworthy cyberspace. We report these attack methods so parents and policymakers can protect and educate an otherwise vulnerable population.

Now that videoconferencing companies are aware of these exploits, they can reconfigure their platforms to avoid such exploitation.

Moving forward, an increased prioritization of privacy could prevent designs that can be exploited for nefarious intent. There are several ways people can spy on you through your webcam.

Here are some recommendations to help keep your kid safe while online. For starters, always cover your child’s webcam. While this does not prevent sexual abuse, it does prevent predators from spying via a webcam.

You should also monitor your child’s internet activity. The anonymity provided by social media sites and chatrooms facilitates the initial contact that can lead to online sexual abuse.

Online strangers are still strangers, so teach your child about stranger danger.

More information about online safety is available on our labs’ websites: Evidence-Based Cybersecurity Research Group and Sarasota Cybersecurity.

Have any thoughts on this? Drop us a line below in the comments, or carry the discussion over to our Twitter or Facebook.

Editors’ Recommendations:

Editor’s Note: This article was written by Eden Kamar, Postdoctoral research fellow, Hebrew University of Jerusalem, and Christian Jordan Howell, Assistant Professor in Cybercrime, University of South Florida, and republished from The Conversation under a Creative Commons license. Read the original article.

The post Online predators are stalking children’s webcams, study reveals appeared first on KnowTechie.

]]>
https://knowtechie.com/online-predators-are-stalking-childrens-webcams-study-reveals/feed/ 0
ChatGPT and other language AIs are just as irrational as we are https://knowtechie.com/chatgpt-and-other-language-ais-are-just-as-irrational-as-we-are/ https://knowtechie.com/chatgpt-and-other-language-ais-are-just-as-irrational-as-we-are/#respond Mon, 10 Apr 2023 10:51:11 +0000 https://knowtechie.com/?p=286906 Don’t bet with ChatGPT – A recent study shows language AIs often make irrational decisions.

The post ChatGPT and other language AIs are just as irrational as we are appeared first on KnowTechie.

]]>
The past few years have seen an explosion of progress in large language model artificial intelligence systems that can do things like write poetry, conduct humanlike conversations and pass medical school exams.

This progress has yielded models like ChatGPT that could have major social and economic ramifications ranging from job displacements and increased misinformation to massive productivity boosts.

Despite their impressive abilities, large language models don’t actually think. They tend to make elementary mistakes and even make things up.

However, because they generate fluent language, people tend to respond to them as though they do think.

ChatGPT on laptop
Image: Pexels

This has led researchers to study the models’ “cognitive” abilities and biases, work that has grown in importance now that large language models are widely accessible.

This line of research dates back to early large language models such as Google’s BERT – which is integrated into its search engine – and so the field has been coined BERTology.

This research has already revealed a lot about what such models can do and where they go wrong.

For instance, cleverly designed experiments have shown that many language models have trouble dealing with negation – for example, a question phrased as “what is not” – and doing simple calculations.

They can be overly confident in their answers, even when wrong. Like other modern machine learning algorithms, they have trouble explaining themselves when asked why they answered a certain way.

Words and thoughts

Inspired by the growing body of research in BERTology and related fields like cognitive science, my student Zhisheng Tang and I set out to answer a seemingly simple question about large language models: Are they rational?

Although the word rational is often used as a synonym for sane or reasonable in everyday English, it has a specific meaning in the field of decision-making.

A decision-making system – whether an individual human or a complex entity like an organization – is rational if, given a set of choices, it chooses to maximize expected gain.

The qualifier “expected” is important because it indicates that decisions are made under conditions of significant uncertainty.

If I toss a fair coin, I know that it will come up heads half of the time on average. However, I can’t make a prediction about the outcome of any given coin toss.

ChatGPT on phone
Image: Unsplash

This is why casinos are able to afford the occasional big payout: Even narrow house odds yield enormous profits on average.

On the surface, it seems odd to assume that a model designed to make accurate predictions about words and sentences without actually understanding their meanings can understand expected gain.

But there is an enormous body of research showing that language and cognition are intertwined.

An excellent example is seminal research done by scientists Edward Sapir and Benjamin Lee Whorf in the early 20th century. Their work suggested that one’s native language and vocabulary can shape the way a person thinks.

The extent to which this is true is controversial, but there is supporting anthropological evidence from the study of Native American cultures.

For instance, speakers of the Zuñi language spoken by the Zuñi people in the American Southwest, which does not have separate words for orange and yellow, are not able to distinguish between these colors as effectively as speakers of languages that do have separate words for the colors.

Making a bet

So are language models rational?

Can they understand expected gain? We conducted a detailed set of experiments to show that, in their original form, models like BERT behave randomly when presented with betlike choices.

This is the case even when we give them a trick question like: If you toss a coin and it comes up heads, you win a diamond; if it comes up tails, you lose a car. Which would you take? The correct answer is heads, but the AI models chose tails about half the time.
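The trick question can be written out as a small expected-gain calculation – presumably what a rational chooser does implicitly. The dollar values below are illustrative assumptions, not figures from the study:

```python
def expected_gain(lottery):
    """Probability-weighted average payoff: sum of p * value."""
    return sum(p * value for p, value in lottery)

# The trick: you pick the coin face you want, so each choice is a sure
# payoff rather than a gamble. Dollar values are assumed for illustration.
payoffs = {"heads": +5_000.0,   # win a diamond
           "tails": -30_000.0}  # lose a car
rational_choice = max(payoffs, key=payoffs.get)  # "heads": winning anything beats losing anything

# The casino example works the same way: a wager that pays the house $1
# with probability 0.52 and loses $1 with probability 0.48 has a small
# positive expected gain for the house on every single bet.
house_edge = expected_gain([(0.52, +1.0), (0.48, -1.0)])
```

A model that answers “tails” about half the time is behaving randomly rather than maximizing expected gain, whatever the exact prices of the prizes.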


Intriguingly, we found that the model can be taught to make relatively rational decisions using only a small set of example questions and answers.

At first blush, this would seem to suggest that the models can indeed do more than just “play” with language. Further experiments, however, showed that the situation is actually much more complex.

For instance, when we used cards or dice instead of coins to frame our bet questions, we found that performance dropped significantly, by over 25%, although it stayed above random selection.

So the idea that the model can be taught general principles of rational decision-making remains unresolved, at best.

More recent case studies that we conducted using ChatGPT confirm that decision-making remains a nontrivial and unsolved problem even for much bigger and more advanced large language models.

Getting the decision right

This line of study is important because rational decision-making under conditions of uncertainty is critical to building systems that understand costs and benefits.

By balancing expected costs and benefits, an intelligent system might have been able to do better than humans at planning around the supply chain disruptions the world experienced during the COVID-19 pandemic, managing inventory or serving as a financial adviser.

Our work ultimately shows that if large language models are used for these kinds of purposes, humans need to guide, review and edit their work.

And until researchers figure out how to endow large language models with a general sense of rationality, the models should be treated with caution, especially in applications requiring high-stakes decision-making.

Have any thoughts on this? Drop us a line below in the comments, or carry the discussion over to our Twitter or Facebook.

Editors’ Recommendations:

Editor’s Note: This article was written by Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California, and republished from The Conversation under a Creative Commons license. Read the original article.


The post ChatGPT and other language AIs are just as irrational as we are appeared first on KnowTechie.

]]>
https://knowtechie.com/chatgpt-and-other-language-ais-are-just-as-irrational-as-we-are/feed/ 0