In this part, I'll be extracting information on apartments from Craigslist search results. I'll be using BeautifulSoup to extract the relevant information from the HTML text.
For reference on CSS selectors, please see the notes from Week 6.
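As a quick refresher, BeautifulSoup supports CSS selectors through its select() method. Here is a minimal, self-contained sketch (the HTML snippet is made up, shaped like a search-results page) of the kind of selector used below:

```python
from bs4 import BeautifulSoup

# a tiny, made-up HTML snippet shaped like a search-results page
html = '<div id="sortable-results"><ul><li>Apt A</li><li>Apt B</li></ul></div>'
soup = BeautifulSoup(html, 'html.parser')

# '#sortable-results > ul li' selects every <li> under the results <ul>
rows = soup.select('#sortable-results > ul li')
print([r.text for r in rows])  # ['Apt A', 'Apt B']
```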
First we need to figure out how to submit a query to Craigslist. As with many websites, one way to do this is simply by constructing the proper URL and sending it to Craigslist. Here's a sample URL produced after manually typing a search into Craigslist:
http://philadelphia.craigslist.org/search/apa?bedrooms=1&pets_cat=1&pets_dog=1&is_furnished=1
There are two components to this URL:
http://philadelphia.craigslist.org/search/apa
?bedrooms=1&pets_cat=1&pets_dog=1&is_furnished=1
We will use the requests.get() function to fetch the search page. For the search parameters, we will set bedrooms=1, which ensures that the number of bedrooms is listed. The easiest way to do this is with the params keyword of the get() function. We didn't cover this in lecture, so I've gone ahead and done the necessary steps.
import requests
url_base = 'http://philadelphia.craigslist.org/search/apa'
params = {'bedrooms': 1}
rsp = requests.get(url_base, params=params)
# Note that requests automatically created the right URL
print(rsp.url)
This should give a list of 120 elements, where each element is the listing for a specific apartment on the search page.
from bs4 import BeautifulSoup
soup = BeautifulSoup(rsp.content, 'html.parser')
selector = "#sortable-results > ul li"
rows = soup.select(selector)
len(rows)
We will now focus on the first element in the list of 120 apartments. Use the prettify() function to print out the HTML for this first element.
From this HTML, identify the HTML elements that hold: the price, the size and number of bedrooms, the listing title, and the datetime of the posting.
For the first apartment, print out each of these pieces of information, using BeautifulSoup to select the proper elements.
Hint: Each of these can be extracted using the text attribute of the selected element object, except for the datetime string, which is stored as an attribute of an HTML element rather than as part of the displayed text on the webpage.
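To make the distinction concrete, here is a minimal example (the HTML snippet is made up, mimicking a listing's time element) showing text access versus attribute access:

```python
from bs4 import BeautifulSoup

# made-up snippet mimicking a listing's <time> element
html = '<p class="result-info"><time datetime="2020-01-15 12:34">Jan 15</time></p>'
el = BeautifulSoup(html, 'html.parser').select_one('p > time')

print(el.text)         # the displayed text: 'Jan 15'
print(el['datetime'])  # the attribute value: '2020-01-15 12:34'
```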
row1 = rows[0]
print(row1)
price = row1.select_one('span.result-price').text
print(price)
nbed_area = row1.select_one('span.housing').text
print(nbed_area)
title = row1.select_one('p > a').text
print(title)
datetime = row1.select_one('p > time')['datetime']
print(datetime)
In this section, I'll create three functions that take the size, price, and time results from the last section and format them properly.
import re
import numpy as np

def format_size_and_bedrooms(size_string):
    """
    Extract size and number of bedrooms from the raw
    text, using regular expressions
    """
    split = re.findall("\n(.*?) -", size_string)

    # both size and bedrooms are listed
    if len(split) == 2:
        n_brs = split[0].strip().replace('br', '')
        this_size = split[1].strip().replace('ft2', '')
    # only bedrooms is listed
    elif 'br' in split[0]:
        n_brs = split[0].strip().replace('br', '')
        this_size = np.nan
    # only size is listed
    elif 'ft2' in split[0]:
        this_size = split[0].strip().replace('ft2', '')
        n_brs = np.nan
    # neither is listed
    else:
        this_size = np.nan
        n_brs = np.nan

    # return floats
    return float(this_size), float(n_brs)
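As a sanity check, the regular expression can be exercised on a sample housing string (the string below is an assumption about the shape of Craigslist's "1br - 700ft2" text):

```python
import re

# assumed shape of the span.housing text on a listing
sample = "\n    1br -\n    700ft2 -\n"
split = re.findall("\n(.*?) -", sample)

n_brs = split[0].strip().replace('br', '')
this_size = split[1].strip().replace('ft2', '')
print(float(this_size), float(n_brs))  # 700.0 1.0
```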
def format_price(price_string):
    # Format the price string and return a float
    # Use strip() to remove the leading '$' and replace() to drop
    # any thousands separators before converting to a float
    this_price = price_string.strip('$').replace(',', '')
    return float(this_price)
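Note that listed prices may include a thousands separator, which float() cannot parse directly, so stripping it as well is safer (the sample strings below are assumptions about the price format):

```python
# sample price strings in the assumed '$1,500' style
for s in ['$950', '$1,500']:
    cleaned = s.strip('$').replace(',', '')
    print(float(cleaned))
```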
import pandas as pd

def format_time(date_string):
    # Parse the datetime string into a pandas Timestamp
    # (the format matches Craigslist's 'YYYY-MM-DD HH:MM' strings)
    datetime_object = pd.to_datetime(date_string, format='%Y-%m-%d %H:%M')
    return datetime_object
print(format_time(datetime))
In this part, I'll re-request information using results from previous parts. The code will loop over 5 pages of search results and scrape data for about 600 apartments.
In the code below, the outer for loop will loop over 5 pages of search results. The inner for loop will loop over the 120 apartments listed on each search page.
After filling in the missing pieces and executing the code cell, I got a DataFrame called results that holds the data for 600 apartment listings.
Be careful if you try to scrape more listings. Craigslist will temporarily ban your IP address (for a very short time) if you scrape too much at once. I've added a sleep() call to the for loop to wait 30 seconds between scraping requests.
If the for loop gets stuck at the "Processing page X..." step for more than a minute or so, your IP address is probably banned temporarily, and you'll have to wait a few minutes before trying again.
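Before running the full loop, you can preview the paginated URLs without sending any requests by preparing them with requests; the 's' offset is the same query parameter used in the loop below:

```python
from requests import Request

# build (but don't send) the request for the second page of results
url = 'http://philadelphia.craigslist.org/search/apa'
req = Request('GET', url, params={'bedrooms': 1, 's': 120}).prepare()
print(req.url)
```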
from time import sleep
import numpy as np
import pandas as pd
results = []
# search in batches of 120 for 5 pages
# NOTE: you will get temporarily banned if running more than ~5 pages or so
# the rate limits are more lenient during off-peak times, and you can try
# experimenting with more pages
max_pages = 5
results_per_page = 120
search_indices = np.arange(0, max_pages*results_per_page, results_per_page)

url = 'http://philadelphia.craigslist.org/search/apa'

# loop over each page of search results
for i, s in enumerate(search_indices):

    print('Processing page %s...' % (i+1))

    # get the response
    resp = requests.get(url, params={'bedrooms': 1, 's': s})

    # parse this page's HTML and get the list of all apartments
    # It should be a list of 120 apartments
    soup = BeautifulSoup(resp.content, 'html.parser')
    apts = soup.select("#sortable-results > ul li")
    print("number of apartments = ", len(apts))

    # loop over each apartment in the list
    page_results = []
    for apt in apts:

        sizes_brs = apt.select_one('span.housing').text   # the bedrooms/size string
        title = apt.select_one('p > a').text              # the title string
        price = apt.select_one('span.result-price').text  # the price string
        dtime = apt.select_one('p > time')['datetime']    # the time string

        # format using functions from Part 1.3
        sizes, brs = format_size_and_bedrooms(sizes_brs)
        price = format_price(price)
        dtime = format_time(dtime)

        # save the result
        page_results.append([dtime, price, sizes, brs, title])

    # create a dataframe and save
    col_names = ['time', 'price', 'size', 'brs', 'title']
    df = pd.DataFrame(page_results, columns=col_names)
    results.append(df)

    print("sleeping for 30 seconds between calls")
    sleep(30)
# Finally, concatenate all the results
results = pd.concat(results, axis=0)
results
Use matplotlib's hist() function to make two histograms: one of apartment prices and one of prices per square foot.
Make sure to add labels to the respective axes and a title describing the plot.
from matplotlib import pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(figsize=(8,6))
# Plot histogram
ax.hist(results['price'], bins=30, color = "skyblue")
ax.set_xlabel("Apartment Price", fontsize=18)
ax.set_ylabel("Number of Apartments", fontsize=18);
ax.set_title("Apartments vs Their Respective Prices in Philadelphia", fontsize=18);
The distribution of apartment prices in Philadelphia is slightly right-skewed, with a mode around $1,500.
fig, ax = plt.subplots(figsize=(8,6))
# Plot histogram
pricesqft = results['price']/results['size']
ax.hist(pricesqft[~np.isnan(pricesqft)], bins=30, color = "skyblue")
ax.set_xlabel("Apartment Price per Square Foot", fontsize=18)
ax.set_ylabel("Number of Apartments", fontsize=18);
ax.set_title("Apartments vs Their Respective Prices/sqft in Philadelphia", fontsize=18);
From the scraped data, the typical rent is centered around $1.7 per square foot, though quite a few apartments have noticeably higher or lower unit prices.
Use altair to explore the relationship between price, size, and number of bedrooms. Make an interactive scatter plot of price (x-axis) vs. size (y-axis), with the points colored by the number of bedrooms.
Make sure the plot is interactive (zoom-able and pan-able) and add a tooltip with all of the columns in our scraped data frame.
With this sort of plot, you can quickly see the outlier apartments in terms of size and price.
import altair as alt
alt.renderers.enable('notebook')

chart = alt.Chart(results).mark_circle(size=50).encode(
    x=alt.X('price:Q', axis=alt.Axis(title='Rent Price ($)')),
    y=alt.Y('size:Q', axis=alt.Axis(title='Size of the Apartment (sqft)')),
    color=alt.Color("brs", legend=alt.Legend(title="Number of Bedrooms")),
    tooltip=[alt.Tooltip("price", title='price ($)'),
             alt.Tooltip("size", title='size (sqft)'),
             alt.Tooltip("brs", title='# of bedrooms'),
             "time", "title"],
).interactive().properties(
    title='Apartment Size vs Rent Price in Philadelphia'
)
chart
From the plot above, the majority of the apartments fall in the range of 500-1500 sqft with rents of $1,000-$3,000. Within this range there is a weak positive trend between apartment size and rent price. The larger apartments (with more bedrooms) tend to have lower prices per square foot.