
New Study Debunks Fears of AI Threats


New research reveals that large language models (LLMs), such as ChatGPT, cannot learn independently or acquire new skills without explicit instructions. This finding dispels the growing fears that these AI models could develop complex reasoning abilities, potentially posing existential threats to humanity. The study emphasizes that while LLMs can generate sophisticated language, they remain inherently predictable and controllable, with no evidence supporting the idea that they could autonomously gain complex thinking skills.


Key Findings: LLMs Are Controllable, Not Threatening

The study, presented at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), examined LLMs’ capabilities in depth. Researchers from the University of Bath and the Technical University of Darmstadt in Germany found that LLMs excel at language proficiency and at following instructions, but cannot master new skills without explicit direction. This makes them predictable and controllable, significantly reducing the threat they were perceived to pose.

Breaking Down the Myth of Emergent Abilities

A central focus of the research was to test whether LLMs could exhibit “emergent abilities,” or the capacity to solve novel problems without prior training. Previous studies had suggested that LLMs might be developing these skills autonomously, leading to concerns about their potential dangers. However, the new research refutes these claims, showing that LLMs’ abilities are not as advanced as some had feared.

The researchers ran thousands of experiments to assess the true capabilities of LLMs. They found that the models’ apparent ability to handle unfamiliar tasks stems not from emergent reasoning but from their proficiency at following instructions and drawing on knowledge memorized during training. The mechanism known as “in-context learning” (ICL) lets an LLM perform a task from examples supplied in the prompt, but it does not mean the model is developing new skills or understanding.
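To make that distinction concrete, here is a minimal sketch of in-context learning in Python. The prompt wording and the query_llm callable are illustrative assumptions for this article, not part of the study: the point is that the worked examples live entirely in the prompt, and no model weights change.

# A minimal sketch of in-context learning (ICL). The model is never
# trained on this task; it only sees worked examples inside the prompt
# and completes the pattern. `query_llm` is a hypothetical stand-in
# for any LLM API call that takes a prompt string and returns text.

FEW_SHOT_PROMPT = """\
Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: Positive

Review: It stopped working after a week and support never replied.
Sentiment: Negative

Review: {review}
Sentiment:"""

def classify_review(review: str, query_llm) -> str:
    # No weights are updated: the "learning" lives entirely in the
    # prompt, which is why the ability vanishes once the examples are
    # removed. That is instruction following plus memory, not an
    # emergent new skill.
    prompt = FEW_SHOT_PROMPT.format(review=review)
    return query_llm(prompt).strip()

if __name__ == "__main__":
    # Dummy backend so the sketch runs without any API key.
    fake_llm = lambda prompt: " Positive"
    print(classify_review("Setup took two minutes and it just works.", fake_llm))

Swapping fake_llm for a real API client changes nothing about the mechanism: the task specification travels with every request rather than living in the model.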

Addressing Misuse: The Real AI Challenge

While the study offers reassurance that LLMs are unlikely to pose existential threats, it highlights the need to focus on the genuine risks associated with AI. One of the primary concerns is the potential misuse of these models to generate fake news, manipulate information, or facilitate fraud. These issues, the researchers argue, require immediate attention and responsible regulation.

Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study, emphasized the importance of shifting the narrative around AI risks. “The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies,” he said. “It also diverts attention from the genuine issues that require our focus, such as the misuse of AI for harmful purposes.”

What This Means for AI Users

For AI users and developers, the study’s findings offer clear guidance: asking an LLM to perform a complex task without explicit instructions is likely to produce errors or misunderstandings. Instead, provide detailed prompts and worked examples to guide the model’s output toward accurate, reliable results.
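As a hedged illustration of that advice (the prompts and the query_llm callable below are invented for this example, not taken from the study), compare a vague request with one that spells out steps, output format, and a worked example:

# Illustrative only: a vague prompt versus an explicit one. Detailed
# instructions plus a worked example constrain the model, which is the
# predictability the study describes. `query_llm` is a hypothetical
# function wrapping whatever LLM API you use.

VAGUE_PROMPT = "Summarize this server log: {log_text}"  # shown for contrast only

DETAILED_PROMPT = """\
You are summarizing a Linux server log for an on-call engineer.

Instructions:
1. List each distinct error with its timestamp.
2. Prefix anything security-related with [SECURITY].
3. End with one sentence recommending the next action.

Example output:
- 02:14 sshd: 5 failed logins from 203.0.113.7 [SECURITY]
- 02:20 nginx: worker process exited on signal 11
Recommendation: block 203.0.113.7 and inspect the nginx crash.

Log:
{log_text}"""

def summarize_log(log_text: str, query_llm) -> str:
    # The explicit steps, required format, and worked example leave the
    # model little room to improvise, so its output stays predictable.
    return query_llm(DETAILED_PROMPT.format(log_text=log_text))

if __name__ == "__main__":
    # Dummy backend so the sketch runs without any API key.
    fake_llm = lambda p: "- 03:02 cron: backup job failed\nRecommendation: rerun the backup job."
    print(summarize_log("Sep  1 03:02 cron[812]: backup job failed", fake_llm))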

Professor Iryna Gurevych, who led the research team at the Technical University of Darmstadt, added, “Our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence. We can control the learning process of LLMs very well after all.”

Moving Forward: Focusing on Real Risks

As LLMs continue to evolve, the research community must prioritize addressing the real challenges they pose, such as the potential for misuse. The study’s authors call for future research to focus on these risks and for regulations to be based on evidence rather than fear.

In conclusion, while large language models like ChatGPT are powerful tools with impressive language capabilities, they are not autonomous entities with the ability to think or reason independently. Their development remains firmly under human control, and the real challenge lies in ensuring their safe and responsible use.

Updated on August 14, 2024
