
Automatically update spreadsheets and graphs – Part 2

Finally got some time to work on this, so I modified the initial code I posted here. This newer version works with whatever spreadsheet and worksheet names you pass to it.

The data I have in the csv file (mydata.csv) looks like this:

Date, Toy Cars sold
2011-01-01, 91
2011-01-02, 92
2011-01-03, 93
2011-01-04, 94
2011-01-05, 95
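For reference, csv.reader parses each line of this file into a list of strings, which is what the get_data() helper in the script below returns. Note that it does not strip the space after the comma. A quick sketch with the sample data inlined:

```python
import csv

# Sample rows from mydata.csv, inlined here for illustration.
csv_text = """Date, Toy Cars sold
2011-01-01, 91
2011-01-02, 92"""

# csv.reader accepts any iterable of lines, not just a file object.
reader = csv.reader(csv_text.splitlines())
data = [row for row in reader]
print(data[0])  # ['Date', ' Toy Cars sold'] -- the leading space is kept
```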


shell>python an_example.py mydata.csv
shell>python an_example.py --user [username] --pw [password] --sname [spreadsheet name] --wname [worksheet name] mydata.csv
shell>python an_example.py --user [username] --pw [password] --cname [category] --sname [spreadsheet name] --wname [worksheet name] mydata.csv

If your email address is not in the “user@gmail.com” format, then use the full email address of your Google Apps account, e.g. “user@googleappsaccount.com”.


import csv
import getpass
import gdata.docs.service
import gdata.spreadsheet.service
import getopt
import sys
import urllib

def get_data(csvfile):
    """Read data from csv file.

    Read data from the given csv file and return it as a list of rows.

    Keyword arguments:
    csvfile -- The csv file that has the data to be written to the worksheet.
    """
    reader = csv.reader(open(csvfile, "r"))

    data = []
    for row in reader:
        data.append(row)

    return data

def get_items(feed):
    """Get the items in the feed.

    Either a list of documents that the user has or a list of worksheets
    within a given spreadsheet.

    Keyword arguments:
    feed -- The feed (xml file).
    """
    items = {}
    for entry in feed.entry:
        id_parts = urllib.unquote(entry.id.text).replace(':', '/').split('/')
        key = id_parts[len(id_parts) - 1]
        items[entry.title.text.lower()] = key
    return items

def usage():
    """ How this works """ 
    usage_text = """python an_example.py --user [username] --pw [password] 
                --cname [category] --sname [spreadsheet name] 
                --wname [worksheet name] csv"""
    print usage_text

# Parse command line options
try:
    opts, args = getopt.getopt(sys.argv[1:], '', ['user=', 'pw=', 'cname=', 'sname=', 'wname='])
except getopt.error, msg:
    print msg
    usage()
    sys.exit(2)

user = ''
pw = ''
cname = ''
sname = ''
wname = ''

# Process options
for option, arg in opts:
    if option == '--user':
      user = arg
    elif option == '--pw':
      pw = arg
    elif option == '--cname':
      cname = arg
    elif option == '--sname':
      sname = arg
    elif option == '--wname':
      wname = arg

# Get the csv file
try:
    data, = args
    sheet_data = get_data(data)
except ValueError:
    print "You must specify a csv file"
    sys.exit(2)
except IOError:
    print "Could not read the csv file"
    sys.exit(2)

while not user:
    print 'NOTE: Please run these tests only with a test account.'
    user = raw_input('Please enter your username: ')

while not pw:
    pw = getpass.getpass()
    if not pw:
        print 'Password cannot be blank.'

while not sname:
    sname = raw_input('Please enter a spreadsheet name: ')
    if not sname:
        print 'Spreadsheet name cannot be blank.'
while not wname:
    wname = raw_input('Please enter a worksheet name: ')
    if not wname:
        print 'Worksheet name cannot be blank.'

# cname is initialized to '' above, so no try/except is needed here.
category = cname
email = user
password = pw
source = category
spreadsheet_name = sname.lower()
worksheet_name = wname.lower()

gd_client = gdata.docs.service.DocsService()
gd_client.ClientLogin(email, password, source=source)

gs_client = gdata.spreadsheet.service.SpreadsheetsService()
gs_client.email = email
gs_client.password = password
gs_client.source = source
gs_client.ProgrammaticLogin()

try:
    # Get the SpreadSheet.
    if not category:
        feed = gd_client.GetDocumentListFeed()
    else:
        query = gdata.docs.service.DocumentQuery(categories=[category])
        feed = gd_client.Query(query.ToUri())

    docs = get_items(feed)

    #print docs # Uncomment this to see the spreadsheets a user has.

    curr_key = docs[spreadsheet_name]

    # Get the WorkSheet within the SpreadSheet.
    feed = gs_client.GetWorksheetsFeed(curr_key)

    sheets = get_items(feed)
    #print sheets # Uncomment this to see the worksheets a given spreadsheet has.

    curr_wksht_id = sheets[worksheet_name]

    # Write the data in the csv file to the given worksheet.
    # Row and column indices in the cells feed are both 1-based.
    rowx = 0
    for row in sheet_data:
        rowx += 1
        for colx, value in enumerate(row):
            gs_client.UpdateCell(rowx, colx + 1, value, curr_key, curr_wksht_id)

except KeyError:
    print "Sorry, the spreadsheet or worksheet name you specified does not exist."
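The key-extraction trick in get_items() works on the entry's Atom id URL. Here is a standalone sketch with a made-up id value (the real text comes from entry.id.text):

```python
# Hypothetical Atom entry id, like the value of entry.id.text in the feed.
entry_id = 'https://spreadsheets.google.com/feeds/spreadsheets/tABC123xyz'

# Replacing ':' with '/' and splitting on '/' leaves the spreadsheet key
# as the last element. (The script also runs urllib.unquote() first,
# which is a no-op for a value like this.)
id_parts = entry_id.replace(':', '/').split('/')
key = id_parts[len(id_parts) - 1]
print(key)  # tABC123xyz
```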

All the code is from the sample files that come with the API, mostly these two: /gdata-2.0.14/samples/docs/docs_example.py and /gdata-2.0.14/samples/spreadsheets/spreadsheetExample.py.

Here are some errors I ran into and what caused them:

1. If you give the wrong spreadsheet key.

  • gdata.service.RequestError: {'status': 400, 'body': "The spreadsheet at this URL could not be found. Make sure that you have the right URL and that the owner of the spreadsheet hasn't deleted it.", 'reason': 'Bad Request'}

2. Attempting to write data to the wrong location.
In my case I was attempting to write to a cell at position row 1, col 0 instead of row 1, col 1. The error clearly states R1C0.

  • gdata.service.RequestError: {'status': 400, 'body': 'Invalid query parameter value for R1C0.', 'reason': 'Bad Request'}
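In other words, enumerate() is 0-based while the cells feed expects 1-based row and column numbers, so both indices have to be shifted by one before calling UpdateCell(). A sketch of just that conversion:

```python
row = ['2011-01-01', '91']  # one row of the csv data

cells = []
for colx, value in enumerate(row):
    # enumerate() starts at 0; the cells feed starts at R1C1, so add 1.
    cells.append((colx + 1, value))
print(cells)  # [(1, '2011-01-01'), (2, '91')]
```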

3. Using wrong email address.
I was using the wrong email address. So I specified the full email address otherwise it assumes that your address is username@gmail.com.

  • gdata.service.Error: Server responded with a 403 code