


How To Get Notifications When New Files Uploaded To Google Drive

· 14 min read · Updated Feb 2022 · Application Programming Interfaces

Google Drive enables you to store your files in the cloud, which you can access anytime and anywhere in the world. In this tutorial, you will learn how to list your Google Drive files, search over them, download stored files, and even upload local files into your drive programmatically using Python.

Here is the table of contents:

  • Enable the Drive API
  • List Files and Directories
  • Upload Files
  • Search for Files and Directories
  • Download Files

To get started, let's install the required libraries for this tutorial:

pip3 install google-api-python-client google-auth-httplib2 google-auth-oauthlib tabulate requests tqdm

Enable the Drive API

Enabling the Google Drive API is very similar to enabling other Google APIs, such as the Gmail API, YouTube API, or Google Search Engine API. First, you need a Google account with Google Drive enabled. Head to this page and click the "Enable the Drive API" button as shown below:

Enable the Drive API

A new window will pop up; choose your type of application. I will stick with the "Desktop app" and then hit the "Create" button. After that, you'll see another window appear saying you're all set:

Drive API is enabled

Download your credentials by clicking the "Download Client Configuration" button and then "Done".

Finally, you need to put the downloaded credentials.json into your working directory (i.e., where you execute the upcoming Python scripts).

List Files and Directories

Before we do anything, we need to authenticate our code to our Google account. The below function does that:

import pickle
import os
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from tabulate import tabulate

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly']

def get_gdrive_service():
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)
    # return Google Drive API service
    return build('drive', 'v3', credentials=creds)

We've imported the necessary modules. The above function was grabbed from the Google Drive quickstart page. It basically looks for the token.pickle file to authenticate with your Google account. If it doesn't find it, it uses credentials.json to prompt you for authentication in your browser. After that, it initiates the Google Drive API service and returns it.

Going to the main function, let's define a function that lists files in our drive:

def main():
    """Shows basic usage of the Drive v3 API.
    Prints the names and ids of the first 5 files the user has access to.
    """
    service = get_gdrive_service()
    # Call the Drive v3 API
    results = service.files().list(
        pageSize=5, fields="nextPageToken, files(id, name, mimeType, size, parents, modifiedTime)").execute()
    # get the results
    items = results.get('files', [])
    # list all files & folders
    list_files(items)

So we used the service.files().list() method to return the first five files/folders the user has access to by specifying pageSize=5; we passed some useful fields to the fields parameter to get details about the listed files, such as mimeType (type of file), size in bytes, parent directory IDs, and the last modified date time. Check this page to see all other fields.
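Note that pageSize only caps how many items a single call returns; to walk through everything in a drive, you follow nextPageToken across calls. Here is a minimal sketch of that loop (the generator name iterate_drive_files is my own, not part of this tutorial):

```python
def iterate_drive_files(service, page_size=100):
    """Yield every file/folder the account can see, following nextPageToken."""
    page_token = None
    while True:
        response = service.files().list(
            pageSize=page_size,
            fields="nextPageToken, files(id, name, mimeType)",
            pageToken=page_token).execute()
        # hand each file of the current page to the caller
        yield from response.get("files", [])
        page_token = response.get("nextPageToken")
        if page_token is None:
            # no more pages
            break
```

You would call it as `for f in iterate_drive_files(get_gdrive_service()): ...`.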

Notice we used the list_files(items) function, which we haven't defined yet. Since results is now a list of dictionaries, it isn't that readable. We pass items to this function to print them in a human-readable format:

def list_files(items):
    """given items returned by Google Drive API, prints them in a tabular way"""
    if not items:
        # empty drive
        print('No files found.')
    else:
        rows = []
        for item in items:
            # get the File ID
            id = item["id"]
            # get the name of file
            name = item["name"]
            try:
                # parent directory ID
                parents = item["parents"]
            except KeyError:
                # has no parents
                parents = "N/A"
            try:
                # get the size in nice bytes format (KB, MB, etc.)
                size = get_size_format(int(item["size"]))
            except KeyError:
                # not a file, may be a folder
                size = "N/A"
            # get the Google Drive type of file
            mime_type = item["mimeType"]
            # get last modified date time
            modified_time = item["modifiedTime"]
            # append everything to the list
            rows.append((id, name, parents, size, mime_type, modified_time))
        print("Files:")
        # convert to a human-readable table
        table = tabulate(rows, headers=["ID", "Name", "Parents", "Size", "Type", "Modified Time"])
        # print the table
        print(table)

We converted that list of dictionaries (the items variable) into a list of tuples (the rows variable), and then passed them to the tabulate module we installed earlier to print them in a nice format. Let's call the main() function:

if __name__ == '__main__':
    main()

Here is my output:

Files:
ID                                 Name                            Parents                  Size      Type                          Modified Time
---------------------------------  ------------------------------  -----------------------  --------  ----------------------------  ------------------------
1FaD2BVO_ppps2BFm463JzKM-gGcEdWVT  some_text.txt                   ['0AOEK-gp9UUuOUk9RVA']  31.00B    text/plain                    2020-05-15T13:22:20.000Z
1vRRRh5OlXpb-vJtphPweCvoh7qYILJYi  google-drive-512.png            ['0AOEK-gp9UUuOUk9RVA']  15.62KB   image/png                     2020-05-14T23:57:18.000Z
1wYY_5Fic8yt8KSy8nnQfjah9EfVRDoIE  bbc.zip                         ['0AOEK-gp9UUuOUk9RVA']  863.61KB  application/x-zip-compressed  2019-08-19T09:52:22.000Z
1FX-KwO6EpCMQg9wtsitQ-JUqYduTWZub  Nasdaq 100 Historical Data.csv  ['0AOEK-gp9UUuOUk9RVA']  363.10KB  text/csv                      2019-05-17T16:00:44.000Z
1shTHGozbqzzy9Rww9IAV5_CCzgPrO30R  my_python_code.py               ['0AOEK-gp9UUuOUk9RVA']  1.92MB    text/x-python                 2019-05-13T14:21:10.000Z

These are the files in my Google Drive. Notice the Size column is scaled into a human-readable byte format; that's because we used the get_size_format() function in list_files(). Here is the code for it:

def get_size_format(b, factor=1024, suffix="B"):
    """
    Scale bytes to its proper byte format
    e.g:
        1253656 => '1.20MB'
        1253656678 => '1.17GB'
    """
    for unit in ["", "K", "M", "G", "T", "P", "E", "Z"]:
        if b < factor:
            return f"{b:.2f}{unit}{suffix}"
        b /= factor
    return f"{b:.2f}Y{suffix}"

The above function should be defined before running the main() method. Otherwise, it'll raise an error. For convenience, check the full code.
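As a quick sanity check, you can run get_size_format() on the example values from its docstring (the function is reproduced here so the snippet runs on its own):

```python
def get_size_format(b, factor=1024, suffix="B"):
    """Scale bytes to a human-readable format (same function as above)."""
    for unit in ["", "K", "M", "G", "T", "P", "E", "Z"]:
        if b < factor:
            return f"{b:.2f}{unit}{suffix}"
        b /= factor
    return f"{b:.2f}Y{suffix}"

print(get_size_format(1253656))     # 1.20MB
print(get_size_format(1253656678))  # 1.17GB
print(get_size_format(31))          # 31.00B
```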

Remember, after you run the script, you'll be prompted in your default browser to select your Google account and allow your application the scopes you specified earlier. Don't worry, this will only happen the first time you run it; after that, token.pickle will be saved and authentication details will be loaded from there instead.

Note: Sometimes, you'll encounter a "This application is not validated" warning (since Google didn't verify your app) after choosing your Google account. It's okay to go to the "Advanced" section and allow the application access to your account.

Upload Files

To upload files to our Google Drive, we need to change the SCOPES list we specified earlier; we need to add the permission to add files/folders:

from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from googleapiclient.http import MediaFileUpload

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly',
          'https://www.googleapis.com/auth/drive.file']

Different scopes mean different privileges, so you need to delete the token.pickle file in your working directory and rerun the code to authenticate with the new scope.
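If you'd rather not delete the file by hand every time you adjust the scopes, a tiny helper can do it; this is my own addition, not part of the tutorial:

```python
import os

def reset_cached_token(token_path="token.pickle"):
    """Remove the cached token so the next run re-authenticates with the new scopes."""
    if os.path.exists(token_path):
        os.remove(token_path)
        return True  # a stale token was removed
    return False     # nothing was cached yet
```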

We will use the same get_gdrive_service() function to authenticate our account. Let's make a function that creates a folder and uploads a sample file to it:

def upload_files():
    """
    Creates a folder and upload a file to it
    """
    # authenticate account
    service = get_gdrive_service()
    # folder details we want to make
    folder_metadata = {
        "name": "TestFolder",
        "mimeType": "application/vnd.google-apps.folder"
    }
    # create the folder
    file = service.files().create(body=folder_metadata, fields="id").execute()
    # get the folder id
    folder_id = file.get("id")
    print("Folder ID:", folder_id)
    # upload a text file
    # first, define file metadata, such as the name and the parent folder ID
    file_metadata = {
        "name": "test.txt",
        "parents": [folder_id]
    }
    # upload
    media = MediaFileUpload("test.txt", resumable=True)
    file = service.files().create(body=file_metadata, media_body=media, fields='id').execute()
    print("File created, id:", file.get("id"))

We used the service.files().create() method to create a new folder; we passed the folder_metadata dictionary that has the type and the name of the folder we want to create, and we passed fields="id" to retrieve the folder ID so we can upload a file into that folder.

Next, we used the MediaFileUpload class to upload the sample file and passed it to the same service.files().create() method. Make sure you have a test file of your choice called test.txt; this time we specified the "parents" attribute in the metadata dictionary, which we simply set to the folder we just created. Let's run it:
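If you upload something other than plain text, you can optionally pass an explicit MIME type to MediaFileUpload via its mimetype parameter; the standard library's mimetypes module can guess one from the file extension. A small sketch (the helper name guess_mimetype is my own):

```python
import mimetypes

def guess_mimetype(path, default="application/octet-stream"):
    """Guess a file's MIME type from its extension, with a generic fallback."""
    mime, _ = mimetypes.guess_type(path)
    return mime or default

print(guess_mimetype("test.txt"))   # text/plain
print(guess_mimetype("photo.png"))  # image/png
```

You would then write, e.g., MediaFileUpload("photo.png", mimetype=guess_mimetype("photo.png"), resumable=True).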

if __name__ == '__main__':
    upload_files()

After I ran the code, a new folder was created in my Google Drive:

A folder created using Google Drive API in Python

And indeed, after I enter that folder, I see the file we just uploaded:

File Uploaded using Google Drive API in Python

We used a text file for demonstration, but you can upload any type of file you want. Check the full code of uploading files to Google Drive.

Search for Files and Directories

Google Drive enables us to search for files and directories using the previously used list() method, just by passing the 'q' parameter. The below function takes the Drive API service and a query and returns the filtered items:

def search(service, query):
    # search for the file
    result = []
    page_token = None
    while True:
        response = service.files().list(q=query,
                                        spaces="drive",
                                        fields="nextPageToken, files(id, name, mimeType)",
                                        pageToken=page_token).execute()
        # iterate over filtered files
        for file in response.get("files", []):
            result.append((file["id"], file["name"], file["mimeType"]))
        page_token = response.get('nextPageToken', None)
        if not page_token:
            # no more files
            break
    return result

Let's see how to use this function:

def main():
    # filter to text files
    filetype = "text/plain"
    # authenticate Google Drive API
    service = get_gdrive_service()
    # search for files that have the type of text/plain
    search_result = search(service, query=f"mimeType='{filetype}'")
    # convert to a table to print nicely
    table = tabulate(search_result, headers=["ID", "Name", "Type"])
    print(table)

So we're filtering text/plain files here by using "mimeType='text/plain'" as the query parameter. If you want to filter by name instead, you can simply use "name='filename.ext'" as the query parameter. See the Google Drive API documentation for more detailed information.
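Since these queries are just strings, quoting gets fiddly when a filename itself contains a single quote (Drive expects it backslash-escaped). A small helper along these lines can assemble simple queries; the name build_query is my own, not part of the tutorial:

```python
def build_query(**conditions):
    """Build a simple Drive query string, AND-ing "field = 'value'" clauses.
    Single quotes inside values are backslash-escaped as Drive expects."""
    clauses = []
    for field, value in conditions.items():
        escaped = value.replace("'", "\\'")
        clauses.append(f"{field} = '{escaped}'")
    return " and ".join(clauses)

print(build_query(mimeType="text/plain"))
# mimeType = 'text/plain'
print(build_query(name="it's.txt", mimeType="text/plain"))
# name = 'it\'s.txt' and mimeType = 'text/plain'
```

You could then call search(service, query=build_query(mimeType="text/plain")).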

Let's execute this:

if __name__ == '__main__':
    main()

Output:

ID                                 Name           Type
---------------------------------  -------------  ----------
15gdpNEYnZ8cvi3PhRjNTvW8mdfix9ojV  test.txt       text/plain
1FaE2BVO_rnps2BFm463JwPN-gGcDdWVT  some_text.txt  text/plain

Check the full code here.

Related: How to Use Gmail API in Python.

Download Files

To download files, we first need to get the file we want to download. We can either search for it using the previous code or manually get its drive ID. In this section, we are going to search for the file by name and download it to our local disk:

import pickle
import os
import re
import io
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from googleapiclient.http import MediaIoBaseDownload
import requests
from tqdm import tqdm

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/drive.metadata',
          'https://www.googleapis.com/auth/drive',
          'https://www.googleapis.com/auth/drive.file'
          ]

I've added two scopes here. That's because we need to create permission to make files shareable and downloadable. Here is the main function:

def download():
    service = get_gdrive_service()
    # the name of the file you want to download from Google Drive
    filename = "bbc.zip"
    # search for the file by name
    search_result = search(service, query=f"name='{filename}'")
    # get the GDrive ID of the file
    file_id = search_result[0][0]
    # make it shareable
    service.permissions().create(body={"role": "reader", "type": "anyone"}, fileId=file_id).execute()
    # download file
    download_file_from_google_drive(file_id, filename)

You saw the first three lines in previous recipes. We just authenticate with our Google account and search for the desired file to download.

After that, we extract the file ID and create a new permission that will allow us to download the file; this is the same as clicking the shareable link button in the Google Drive web interface.

Finally, we use our defined download_file_from_google_drive() function to download the file. There you have it:

def download_file_from_google_drive(id, destination):
    def get_confirm_token(response):
        for key, value in response.cookies.items():
            if key.startswith('download_warning'):
                return value
        return None

    def save_response_content(response, destination):
        CHUNK_SIZE = 32768
        # get the file size from Content-Length response header
        file_size = int(response.headers.get("Content-Length", 0))
        # extract Content disposition from response headers
        content_disposition = response.headers.get("content-disposition")
        # parse filename
        filename = re.findall("filename=\"(.+)\"", content_disposition)[0]
        print("[+] File size:", file_size)
        print("[+] File name:", filename)
        progress = tqdm(response.iter_content(CHUNK_SIZE), f"Downloading {destination}", total=file_size, unit="Byte", unit_scale=True, unit_divisor=1024)
        with open(destination, "wb") as f:
            for chunk in progress:
                if chunk: # filter out keep-alive new chunks
                    f.write(chunk)
                    # update the progress bar
                    progress.update(len(chunk))
        progress.close()

    # base URL for download
    URL = "https://docs.google.com/uc?export=download"
    # init a HTTP session
    session = requests.Session()
    # make a request
    response = session.get(URL, params={'id': id}, stream=True)
    print("[+] Downloading", response.url)
    # get confirmation token
    token = get_confirm_token(response)
    if token:
        params = {'id': id, 'confirm': token}
        response = session.get(URL, params=params, stream=True)
    # download to disk
    save_response_content(response, destination)

I've grabbed a part of the above code from the downloading files tutorial; it is simply making a GET request to the target URL we constructed by passing the file ID as params in the session.get() method.
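The filename parsing inside save_response_content() is just a regex over the Content-Disposition header. Isolated, it looks like this (the sample header value below is made up for illustration):

```python
import re

# a typical Content-Disposition header value (sample, made up)
content_disposition = 'attachment; filename="bbc.zip"; filename*=UTF-8\'\'bbc.zip'

# same regex as in save_response_content() above
filename = re.findall('filename="(.+)"', content_disposition)[0]
print(filename)  # bbc.zip
```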

I've used the tqdm library to print a progress bar to see when it'll finish, which will come in handy for large files. Let's execute it:

if __name__ == '__main__':
    download()

This will search for the bbc.zip file, download it, and save it in your working directory. Check the full code.

Conclusion

Alright, there you have it. These are basically the core functionalities of Google Drive. Now you know how to do them in Python without manual mouse clicks!

Remember, whenever you change the SCOPES list, you need to delete token.pickle file to authenticate to your account again with the new scopes. See this page for further information, along with a list of scopes and their explanations.

Feel free to edit the code to accept file names as parameters to download or upload them. Go and try to make the script as dynamic as possible by introducing the argparse module to make some useful scripts. Let's see what you build!
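As a starting point for that, here is a rough argparse skeleton; the subcommand names are my own suggestion, and you would dispatch each one to the matching function from this tutorial:

```python
import argparse

def build_parser():
    """Command-line interface sketch: one subcommand per Drive operation."""
    parser = argparse.ArgumentParser(description="Google Drive helper (sketch)")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("list", help="list files in the drive")
    upload = sub.add_parser("upload", help="upload a local file")
    upload.add_argument("path", help="path of the local file to upload")
    download = sub.add_parser("download", help="download a Drive file by name")
    download.add_argument("name", help="name of the file on Drive")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.command)
```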

Below is a listing of other Google APIs tutorials, if you want to bank check them out:

  • How to Extract Google Trends Data in Python.
  • How to Use Google Custom Search Engine API in Python.
  • How to Extract YouTube Data using YouTube API in Python.
  • How to Use Gmail API in Python.

Happy Coding ♥


Source: https://www.thepythoncode.com/article/using-google-drive--api-in-python
