Vocabs store all the general and user-specific information about a given word. Use this endpoint if you would like to:
If you're downloading a user's entire collection of Items, fetch the Items by themselves first, then fetch the related Vocabs in batches through this URL. Otherwise, use the include_vocabs parameter when fetching Items.
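As a minimal sketch of the second approach, reusing the `client`, `token`, and `json` objects from the examples below: the helper name, the `'true'` literal, and the /items URL (inferred by analogy with the /vocabs URL used below) are our assumptions, not confirmed API details.

```python
def items_fetch_params(token):
    # GET parameters for an Items fetch; include_vocabs asks the server
    # to bundle the related Vocabs into the same response
    return {
        'bearer_token': token,
        'include_vocabs': 'true',
        'gzip': False,
    }

# response = client.get('http://legacy.skritter.com/api/v0/items',
#                       items_fetch_params(token))
# response = json.loads(response.content)
# items, vocabs = response['Items'], response['Vocabs']
```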
Fetch all Vocabs related to the Items you fetched previously.
Solution: make id fetches, 100 at a time.
# given a slew of Items, determine the Vocab ids we need
to_fetch = set()
for item in items:
    for vocab_id in item['vocabIds']:
        to_fetch.add(vocab_id)
to_fetch = list(to_fetch)  # a list so we can slice it below

# prepare the request GET parameters
params = {
    'bearer_token': token,
    'gzip': False
}

vocabs = []  # where we'll be storing the results
while to_fetch:
    # get the first 100 Vocabs (the max allowed per request)
    subset = to_fetch[:100]
    params['ids'] = ','.join(subset)
    # make repeated calls to fetch everything
    response = client.get('http://legacy.skritter.com/api/v0/vocabs', params)
    response = json.loads(response.content)
    # gather up the results
    vocabs += response['Vocabs']
    to_fetch = to_fetch[100:]
    print 'fetched %d vocabs, %d left to go' % (len(vocabs), len(to_fetch))
return vocabs
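The 100-at-a-time batching above can also be factored into a small standalone helper; this is a sketch, the function name is ours, and the default of 100 mirrors the per-request maximum stated above.

```python
def batch_ids(ids, size=100):
    # split a collection of Vocab ids into comma-joined 'ids' parameter
    # values, at most `size` ids per batch
    ids = list(ids)
    return [','.join(ids[i:i + size]) for i in range(0, len(ids), size)]
```

Each returned string can be dropped straight into params['ids'] for one request.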
Keep an up-to-date record of all custom data (custom definitions, stars, etc.) for the user.
Use offset to fetch only Vocabs whose user-specific properties have changed since we last checked.
# given a list of Vocabs we already have, fetch the latest updates
vocab_dict = dict([(vocab['id'], vocab) for vocab in vocabs])
offset = max([vocab['changed'] for vocab in vocabs]) if vocabs else 0

# prepare the request GET parameters
params = {
    'bearer_token': token,
    'sort': 'all',
    'gzip': False,
    'offset': offset
}

while True:
    # make repeated calls to fetch everything that's new
    response = client.get('http://legacy.skritter.com/api/v0/vocabs', params)
    response = json.loads(response.content)
    # update and extend our local data
    for vocab in response['Vocabs']:
        vocab_dict[vocab['id']] = vocab
    # continue only if the server says there could be more
    if 'cursor' not in response:
        break
    # for the next iteration, provide the cursor from the last response
    params['cursor'] = response['cursor']
return vocab_dict
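The two pieces of bookkeeping in this pattern, computing the next offset and folding a batch of updates into the local record, can be sketched as standalone helpers (the function names are ours):

```python
def next_offset(vocabs):
    # the most recent 'changed' timestamp we hold, or 0 on a first sync
    return max([vocab['changed'] for vocab in vocabs]) if vocabs else 0

def merge_updates(vocab_dict, updates):
    # fold a batch of updated Vocabs into the local record, keyed by id;
    # an update with a known id simply replaces the stale copy
    for vocab in updates:
        vocab_dict[vocab['id']] = vocab
    return vocab_dict
```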
Search the database for some Vocabs that contain 中.
Use the 'q' query parameter to search for this word.
# prepare the request GET parameters
params = {
    'bearer_token': token,
    'q': u'中',
    'gzip': False,
    'containing_cursor': True,
    'limit': 1  # ensure only one result is fetched
}

# just get one batch of containing words or sentences
response = client.get('http://legacy.skritter.com/api/v0/vocabs', params)
response = json.loads(response.content)

if not response['Vocabs']:
    print 'word not found'
else:
    # print the search results
    for vocab in response['ContainingVocabs']:
        print vocab['writing'], vocab['reading'], vocab['definitions'].get('en')
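One practical note: because the query here is a non-ASCII character, it must end up UTF-8 percent-encoded in the request URL. Most HTTP clients do this for you when given a params dict; if yours does not, Python's standard library can (Python 3 shown below; in Python 2 the equivalent function lives in the urllib module):

```python
from urllib.parse import urlencode

# urlencode percent-encodes each value as UTF-8 by default,
# so the character 中 becomes %E4%B8%AD in the query string
params = {'q': u'中', 'limit': 1}
query_string = urlencode(params)
```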