Scrape Google Inline Images with Python

Contents: intro, imports, what will be scraped, process, code, links, outro.


This blog post is a continuation of the Google web scraping series. Here you'll see how to scrape Google Inline Images using Python with the beautifulsoup, requests, lxml, re, base64, BytesIO, and PIL libraries. An alternative API solution will be shown as well.

Note: This blog post assumes that you're familiar with the beautifulsoup and requests libraries and have a basic understanding of regular expressions.


import requests, lxml, re, base64
from bs4 import BeautifulSoup 
from io import BytesIO # for decoding base64 image
from PIL import Image # for saving decoded image
from serpapi import GoogleSearch # alternative API solution

What will be scraped


Selecting the container, the link, and where the photo is being used.

Extracting thumbnail
To extract the thumbnail, we need to look at the <img> tag with an id of dimg_XX (where XX is some number).

If you open the page source (Ctrl + U) and search for dimg_36 (the digits depend on the HTML code), you'll find two occurrences, and one of them will be inside the <script> tags; that's the one we need.

In order to extract thumbnails, we need to use a regex to pull them out of the <script> tags, because if you parse the data from the src attribute directly, the output will look like this: data:image/gif;base64,R0lGODlhAQABAIAAAP///////yH5BAEKAAEALAAAAAABAAEAAAICTAEAOw== which is a base64-encoded picture.

More about this topic can be found on MDN Web Docs.
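As a side note, a data URI packs the MIME type and the payload into a single string, separated by the first comma. A minimal sketch of splitting one apart (the payload below is a shortened dummy string, not a real image):

```python
# Split a data URI into its header and base64 payload.
# The payload here is a dummy stand-in, not a decodable image.
uri = "data:image/gif;base64,R0lGODlhAQABAIAAAP8AAA=="

header, payload = uri.split(",", 1)  # split only on the first comma
print(header)   # data:image/gif;base64
print(payload)  # R0lGODlhAQABAIAAAP8AAA==
```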

The regular expression is extremely simple:

s='data:image/jpeg;base64,(.*?)';

Regular Expression explanation:

  1. Look for the literal part s='data:image/jpeg;base64,.
  2. Create a capture group (.*?) which lazily grabs everything up to the closing '; symbols.
  3. Extract only the capture group, without the surrounding parts.
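To see the pattern in action on a made-up <script> fragment (the base64 payload below is a truncated dummy string, not a full image):

```python
import re

# A made-up <script> fragment mimicking what Google embeds in the page.
script_text = "var _i='foo';s='data:image/jpeg;base64,/9j/4AAQSkZJRg==';var x=1;"

# The lazy capture group stops at the first closing '; after the payload.
matches = re.findall(r"s='data:image/jpeg;base64,(.*?)';", script_text)
print(matches)  # ['/9j/4AAQSkZJRg==']
```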

Screenshot to illustrate what is being captured by the regular expression:

After that, the decoded base64 string can be saved using the PIL module. More about this can be found in this StackOverflow answer.


import requests, lxml, re, urllib.parse, base64
from bs4 import BeautifulSoup
from PIL import Image
from io import BytesIO

headers = {
    "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

params = {
    "q": "minecraft shaders photo",
    "sourceid": "chrome",
}

html = requests.get("", params=params, headers=headers)
soup = BeautifulSoup(html.text, 'lxml')

# each result container holds the link and the page where the photo is being used
for result in'div[jsname=dTDiAc]'):
    link = f"{result.a['href']}"
    being_used_on = result['data-lpage']
    print(f'Link: {link}\nBeing used on: {being_used_on}\n')

# finding all script (<script>) tags
script_img_tags = soup.find_all('script')

img_matches = re.findall(r"s='data:image/jpeg;base64,(.*?)';", str(script_img_tags))

for index, image in enumerate(img_matches):
    # decode the base64 string and open it as an image in memory
    final_image = Image.open(BytesIO(base64.b64decode(image)))

    #'your/absolute_or_relative/path/inline_image_{index}.jpg', 'JPEG')

# part of the output:
Saved images in the background:

GIF to illustrate the output:

SerpApi is a paid API with a free trial of 5,000 searches.

The biggest difference is that you don't have to figure out where to parse certain elements from in order to get a proper image size, since it's already done for the end user. Other than that, there's no need to maintain the parser or find workarounds if your script's requests get blocked.

import json
from serpapi import GoogleSearch

params = {
  "api_key": "YOUR_API_KEY",
  "engine": "google",
  "q": "minecraft shaders photo",
  "hl": "en",

search = GoogleSearch(params)
results = search.get_dict()

print(json.dumps(results['inline_images'], indent=2, ensure_ascii=False))

    "link": "/search?q=minecraft+shaders+photo&hl=en&tbm=isch&source=iu&ictx=1&fir=bwVoAE4HTl8GXM%252Cz3y5GvasoN8hFM%252C_&vet=1&usg=AI4_-kRfUHjrz711om99elb_i3GwJuTBnw&sa=X&ved=2ahUKEwit6Jq38PHxAhUkSTABHfJyCn8Q9QF6BAgWEAE#imgrc=bwVoAE4HTl8GXM",
    "thumbnail": ""



If you have any questions, or something isn't working correctly, or you want to share something else, feel free to drop a comment in the comment section or reach out via Twitter at @serp_api.

Dimitry, and the rest of the SerpApi Team.