Question

Part 1:
The National UFO Reporting Database has an index of UFO sightings. For this project, you can use either all sightings or all North American sightings. Your first step will be to create a Python script that extracts the location of each sighting in the database and creates a tally of sightings by city.

For testing, here is a page you can access as often as you want. It mimics the Events by Date listing and links to two months' worth of data. The format of those sample pages is the same as on the database website. ONLY once your code works on those test pages should you move on to accessing the actual pages.

Many cities will appear multiple times. You do not want to list the cities over and over; instead, store each one in an object. You should create a class UFO that has variables for at least city, state, and number of reports. As you read in records, keep track of how many reports there are for each city. You will likely need a data structure to connect city and state names to your objects. When you get a new report for a city/state, increment the variable that tracks the number of reports.
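A minimal sketch of that bookkeeping, assuming nothing about the rest of your script (the class and variable names are illustrative, not required):

class UFO:
    """One city/state location and its running count of reports."""
    def __init__(self, city, state):
        self.city = city
        self.state = state
        self.reports = 0

# Map a (city, state) key to its UFO object so repeat sightings
# update the same object instead of creating duplicates.
sightings = {}

def record_sighting(city, state):
    key = (city, state)
    if key not in sightings:
        sightings[key] = UFO(city, state)
    sightings[key].reports += 1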

At the end of this step, you should have a set of objects for the cities and states (or city/province or city/country if you use international locations) where there were sightings, along with a count of how many times each city appeared. Create an output file containing that data, using a tab to separate the city and the count. Your output file should look like this:
Chicago, IL 5
Washington, DC 7
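Continuing the sketch above, writing that file is a single loop over the stored objects (the file name matches the deliverable below):

with open("lastname_firstname_cities.txt", "w") as out:
    for ufo in sightings.values():
        # A tab separates the "City, ST" label from its count.
        out.write(f"{ufo.city}, {ufo.state}\t{ufo.reports}\n")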

Deliverable: A Python script called lastname_firstname_final_1.py that I can run by typing "python FILENAME". It should output a file called lastname_firstname_cities.txt that contains one city/state (or city/province or city/country if you use international locations) with the corresponding count on each line.
You must include a sleep of at least 5 seconds between page accesses in order to receive credit for this part.
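The delay is easy to forget, so here is a sketch of a polite fetch loop; month_urls and parse_report_page are placeholders for whatever your index scrape and parsing code produce:

import time
import requests

for url in month_urls:                # placeholder: list built from the index page
    response = requests.get(url)
    response.raise_for_status()       # fail loudly on a bad fetch
    parse_report_page(response.text)  # placeholder for your own parsing routine
    time.sleep(5)                     # required: at least 5 seconds between page accesses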

Part 2: Next, convert your list of cities to latitude/longitude coordinates. You can make that conversion with the Open Maps API. You will need a key, which is covered in the API module of the course. At the end of this step, you should produce a list of latitudes and longitudes with a corresponding count of UFO sightings for each.

Deliverable: A Python file called lastname_firstname_final_2.py that I can run by typing "python FILENAME". It should open your file lastname_firstname_cities.txt from the current directory (DO NOT put a full path to the file in your code; just use the file name so it will work on my system). It should output a file lastname_firstname_latlon.txt with one latitude/longitude pair and its corresponding count on each line, matching line-for-line with the city/state entries in lastname_firstname_cities.txt.
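The course's Open Maps key setup isn't reproduced here, so as a stand-in this sketch geocodes each line with OpenStreetMap's public Nominatim search endpoint, which needs no key but does require a User-Agent header and at most one request per second:

import time
import requests

GEOCODE_URL = "https://nominatim.openstreetmap.org/search"
HEADERS = {"User-Agent": "ufo-tally-homework"}  # Nominatim rejects requests without a User-Agent

with open("lastname_firstname_cities.txt") as cities, \
     open("lastname_firstname_latlon.txt", "w") as latlon:
    for line in cities:
        place, count = line.rstrip("\n").split("\t")
        params = {"q": place, "format": "json", "limit": 1}
        results = requests.get(GEOCODE_URL, params=params, headers=HEADERS).json()
        if results:  # skip any place the geocoder cannot resolve
            latlon.write(f"{results[0]['lat']}\t{results[0]['lon']}\t{count}\n")
        time.sleep(1)  # Nominatim usage policy: at most one request per second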

Solution Preview

These solutions may offer step-by-step problem-solving explanations or good writing examples, including modern formatting styles and the construction of bibliographies, in-text citations, and references. Students may use these solutions for personal skill-building and practice. Unethical use is strictly forbidden.

from time import sleep                  # needed for the required 5-second delay between page accesses
from collections import defaultdict
from requests import get
from lxml import html

class UFO(object):
    def __init__(self, main_url):
        self.main_url = main_url

    def make_urls(self):
        # Return the URLs of all monthly report tables linked from the main page.
        main_page = get(self.main_url)                           # fetch the index page
        main_page_content = html.fromstring(main_page.content)   # parse the HTML into an element tree
        self.urls = []
        for url in main_page_content.xpath('//a')[1:]:           # take every anchor tag except the first
            self.urls.append(url.attrib['href'])                 # collect each link's href into self.urls
        return self.urls

    def make_cities_dict(self):
        self.urls = self.make_urls()
        self.sighting_dict = defaultdict(int)   # default dictionary: (city, state) -> report count

        for url in self.urls:                   # iterate over all monthly table pages
            report_page = get(url)              # same fetch-and-parse procedure as above
            report_page_content = html.fromstring(report_page.content)
            rows = report_page_content.xpath('//tr')   # xpath('//tr') extracts every table row on the page
            for row in rows[1:]:                # skip the first row (the header)...

By purchasing this solution you'll be able to access the following files:
Solution.zip.
