Scrape DuckDuckGo Organic News Results with Python

Contents: intro, imports, what will be scraped, process, code, links, outro.


This blog post is a continuation of the DuckDuckGo web scraping series. Here you'll see how to scrape News Results from organic search using Python with the selenium library. An alternative API solution will also be shown.

Prerequisites: familiarity with the selenium library and regular expressions.


from selenium import webdriver
import urllib.parse, re

What will be scraped


Grabbing the CSS selectors for the container, title, link, source, and publish date.

To get an accurate selection, you need to specify an id selector followed by a class selector, e.g. #m1-0 .has-image. CSS selectors reference.

SelectorGadget extension was used to grab CSS selectors.

Extract and decode thumbnail URL

A convenient way to extract the thumbnail is to use a regular expression, which you can see in action on regex101 or in the online IDE.

You can use either:

re.findall(pattern, string)   # returns a list of captured strings
re.finditer(pattern, string)  # returns an iterator of match objects

Basically, the findall() method returns plain strings, so you can't call .group() (including named groups) on its results, while the finditer() method yields match objects that support it.
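A minimal illustration of the difference (the pattern and input string below are made up for demonstration):

```python
import re

text = 'id=42&name=musk'

# findall() returns the captured strings (tuples when there are multiple groups)
print(re.findall(r'(\w+)=(\w+)', text))
# [('id', '42'), ('name', 'musk')]

# finditer() yields match objects, so .group() and named groups work
for match in re.finditer(r'(?P<key>\w+)=(?P<value>\w+)', text):
    print(match.group('key'), match.group('value'))
# id 42
# name musk
```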

Screenshot showing what the regular expression captures:

Note: I'm not sure if it's the best overall regular expression for this task, but it works.

After that, the URL needs to be decoded with the urllib.parse.unquote() method; that's the easy part.

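As a sketch of the whole extraction step, here's the regex and unquote() applied to a sample style attribute (the image URL below is made up for illustration):

```python
import re
import urllib.parse

# Hypothetical style attribute, as returned by get_attribute('style')
style = 'background-image: url("//external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fexample.com%2Fimage.jpg&f=1&h=110");'

# capture everything between "u=" and "&f=1&h=110"
encoded = ''.join(re.findall(r'url\("//external-content\.duckduckgo\.com/iu/\?u=(.*)&f=1&h=110"\);', style))
print(encoded)  # https%3A%2F%2Fexample.com%2Fimage.jpg

# decode %XX escapes back into a regular URL
decoded = urllib.parse.unquote(encoded)
print(decoded)  # https://example.com/image.jpg
```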


from selenium import webdriver
import urllib.parse, re

driver = webdriver.Chrome(executable_path='C:/Users/dimit/PycharmProjects/pythonProject/Scrape Search Engines/Walmart/chromedriver.exe')
driver.get('https://duckduckgo.com/?q=elon musk&kl=us-en&ia=web')

for result in driver.find_elements_by_css_selector('#m1-0 .has-image'):
    title = result.find_element_by_css_selector('.js-carousel-item-title').text.strip()
    link = result.find_element_by_css_selector('.js-carousel-item-title').get_attribute('href')
    source = result.find_element_by_css_selector('.result__url').text
    date = result.find_element_by_css_selector('.tile__time').text
    thumbnail_encoded = result.find_element_by_css_selector('.module--carousel__image').get_attribute('style')

    # extract the encoded thumbnail URL from the inline style attribute
    match_thumbnail_urls = ''.join(re.findall(r'background-image: url\("//external-content\.duckduckgo\.com/iu/\?u=(.*)&f=1&h=110"\);', thumbnail_encoded))

    # decode %XX escapes back into a regular URL
    thumbnail = urllib.parse.unquote(match_thumbnail_urls)

    print(title, link, source, date, thumbnail, sep='\n')


Part of the output:

Elon Musk admits Tesla's Cybertruck could flop

SerpApi is a paid API with a free plan.

Here you'll see that the difference is there's no need to figure out how to scrape thumbnails if they're needed. All that needs to be done is to iterate over structured JSON.

from serpapi import GoogleSearch
import json # for pretty printing

params = {
  "api_key": "YOUR_API_KEY",
  "engine": "duckduckgo",
  "q": "elon musk",
  "kl": "us-en"
}
search = GoogleSearch(params)
results = search.get_dict()

print(json.dumps(results['news_results'], indent=2, ensure_ascii=False))

    "position": 1,
    "title": "Elon Musk admits Tesla's Cybertruck could flop",
    "link": "",
    "snippet": "Tesla CEO Elon Musk admitted Thursday on Twitter that the Cybertruck might flop but said he doesn't care because he loves its unusual trapezoid-like design.",
    "source": "CNBC",
    "date": "4 hours ago",
    "thumbnail": ""



If you have any questions, something isn't working correctly, or you want to share something else, feel free to drop a comment in the comment section or reach out via Twitter at @serp_api.

Dimitry, and the rest of SerpApi Team.