Python - RSS, not just for podcasts.

Earthquakes, hurricanes, volcanoes, ice storms, BLOG, and much more. There's more to RSS than just podcasts.
September 23, 2020

This time, I'm showing sample Python code for working with RSS feeds, and the kinds of information that can be obtained by using them.

What is RSS

RSS stands for Really Simple Syndication. Put simply, it's an XML-based format that websites use to publish frequently updated information in a form that programs can easily read.
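
To make that concrete, here is a minimal, hypothetical RSS 2.0 document ( the values are made up for illustration ). Every feed shown in this post follows this same basic shape: a channel that describes the feed, containing one or more items.

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <link>http://www.example.com/</link>
    <description>A hypothetical feed.</description>
    <item>
      <title>First story</title>
      <link>http://www.example.com/story1</link>
      <pubDate>Wed, 23 Sep 2020 12:00:00 GMT</pubDate>
      <description>A short summary of the first story.</description>
    </item>
  </channel>
</rss>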

Over the past few years, RSS has fallen out of favor among many developers, who prefer proprietary APIs instead. There are even a few people who say that RSS is obsolete and should NOT be used for anything.

But RSS can still be used to obtain a vast amount of interesting, and useful, information about a great many subjects, like information from the NATIONAL HURRICANE CENTER and CENTRAL PACIFIC HURRICANE CENTER. It can be used to download and filter current news stories ( this is called a news aggregator ). It can be used to download earthquake data, weather forecasts, news about volcanoes, and more.

And... it can still be used to download directory listings of podcasts, which is what most people know it for.

Yes, you can find the exact same info using your web browser, or one of the many RSS feed readers which can be downloaded for FREE.
So, why do it in Python?

The Python code that I'm showing in this post is intended to be a starting point for automated applications, like an alarm that fires if a news story appears on the web about a particular person, or a job that automatically imports earthquake info into a database. Using RSS for this sort of thing is much simpler, and easier to maintain, than, say, a web-page scraper.
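
To give you a taste of the alarm idea, here is a minimal sketch of a polling loop. It assumes the RSSGet() function defined later in this post, uses one of the news feeds shown later, and watches for a made-up name.

import time

# Check the feed every 15 minutes, and flag any story we have not
# already seen that mentions the name. Requires RSSGet(), shown below.
seen = set()
while True:
   for item in RSSGet( 'http://www.cbn.com/cbnnews/us/feed/' ):
      title = str( item.get('title') )
      if "Jane Doe" in title and title not in seen:
         seen.add( title )
         print( "ALARM: ", title )
   time.sleep( 15 * 60 )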

Plus, it's just an interesting subject.

More info about RSS, its history, and standards can be found at https://en.wikipedia.org/wiki/RSS

RSS Feeds - A service that supplies data in an RSS format is called an RSS feed. Doing an internet search for the words 'RSS Feeds' returns hundreds of them: news feeds, podcasts, business data, weather, and more. The RSS feeds that I am showcasing in this post are just a small sample, but they all work in much the same way. So once you understand how one works, you have mastered them all.



ISO8601 and UTC timezone

The date/time values used in many RSS feeds are in ISO8601 format ( yyyy-mm-ddThh:mm:ss.xxxxZ ).

The letter ‘Z’ at the end indicates that this is UTC time; meaning that this is the time in Greenwich, England, and not your local timezone.

So for example, if you live in Chicago and Daylight Saving Time is in effect (CDT timezone), you would subtract 5 hours from the time shown to get your local time; when it is not in effect (CST), you would subtract 6.

The TIMEBIE website has a conversion tool that allows you to easily convert UTC time into your local timezone, and back again. It will also allow you to print a conversion table, which many people may find handy. The TIMEBIE website is http://www.timebie.com

For more info about the ISO8601 time format, see https://en.wikipedia.org/wiki/ISO_8601
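
If you would rather let Python do the conversion, here is a minimal sketch using only the standard datetime module. The timestamp is one taken from the earthquake sample output later in this post.

from datetime import datetime, timezone

# Parse an ISO8601 / UTC timestamp, then convert it to local time.
stamp = "2020-09-15T16:08:40.009Z"
utc = datetime.strptime( stamp, "%Y-%m-%dT%H:%M:%S.%fZ" ).replace( tzinfo=timezone.utc )
local = utc.astimezone()    # No argument means the machine's local timezone.
print( local.strftime("%Y-%m-%d %H:%M:%S %Z") )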


Getting Started, the RSSGet() function

There are a number of different ways of working with RSS data in Python; most of them require you to download and install a module that is not part of the standard library. Examples are BeautifulSoup and feedparser.

I prefer not to go this route, because it requires an extra step, and the project is not easily transferable to another computer ( the modules might not be installed there ).

Instead… I have written a function called RSSGet() from scratch that will download the data, parse it, and return it in a usable form. The sample code I am showing later in this post will be making use of this function.

Please feel free to cut/paste this into your projects if you wish.


#!/usr/bin/python3

import urllib.request

# Keys generally used in RSS, XML and ATOM.
# ( 'enclosure' is included because podcast feeds carry their mp3
#   links in enclosure tags; see fetchPodcast() later in this post. )
rssKeys = """
author.category.channel.feed.copyright.rights.subtitle.description.
summary.content.generator.guid.id.image.logo.item.entry.lastBuildDate.
updated.link.managingEditor.contributor.pubDate.published.title.ttl.
url.icon.enclosure.
"""

# These keys are used by the National Hurricane Center.
rssKeys = rssKeys + """
nhc:center.nhc:type.nhc:name.nhc:wallet.nhc:atcf.nhc:datetime.
nhc:movement.nhc:pressure.nhc:wind.nhc:headline.
"""

# Transform rssKeys from a string into a usable list.
rssKeys = rssKeys.replace("\n", "").replace(chr(32), "").strip().split(".")



def RSSGet(url):
   """Retrieve and parse an RSS, XML or ATOM feed."""
   html = ""
   stack = []   # The list of parsed items to be returned.
   core = {}    # The item currently being built.
   prefix = ""
   try:
     with urllib.request.urlopen(url) as response:
        html = response.read().decode().strip()
   except Exception:
     pass       # On any download error, return an empty list.
   html = html.replace("<br/>", "\n")
   html = html.replace("<br />", "\n")
   html = html.replace("<![CDATA[", "")
   html = html.replace("]]>", "")
   # Mark the opening and closing tag of every known key with a chr(200)
   # delimiter, then split the whole document on that delimiter.
   for key in rssKeys:
      html = html.replace( "<" + key + ">", chr(200) + key + chr(200) )
      html = html.replace( "</" + key + ">", chr(200) + "/" + key + chr(200) )
   # Tags that carry attributes are marked separately, so that their
   # attribute text becomes the stored value.
   html = html.replace( "<item ", chr(200) + "item" + chr(200) )
   html = html.replace( "<guid ", chr(200) + "guid" + chr(200) )
   html = html.replace( "<enclosure ", chr(200) + "enclosure" + chr(200) )
   html = html.split(chr(200))
   # Walk the chunks in pairs: a known key, followed by its value.
   for x in range(0, len(html) - 1):
     key = html[x]
     value = html[x + 1].strip()
     if key == "image": prefix = key + "."   # Image sub-fields get a prefix.
     if key == "/image": prefix = ""
     if key in rssKeys and value != "" and key != "":
        core[prefix + key] = value
     if key == "/item" or key == "/entry" or key == "item" or key == "entry":
        if core != {}:
          stack.append(core)
          core = {}
          prefix = ""
   return stack



Here is an example of how you might use RSSGet() in your project:


items = RSSGet( 'http://www.cbn.com/cbnnews/us/feed/' )
for item in items:
   print( 'title: ',  item.get('title') )
   print( 'pubDate: ',  item.get('pubDate') )
   print( 'link: ',  item.get('link') )
   print( "" )
   print( item.get('description') )
   print( '==============================================================' )
   print( "" )
  

As you can see in the code example above, an RSS feed contains one or more items, which you can think of as records. Each item may contain the information about one news story, blog post, earthquake, etc. It's one item per event / post.

Each item usually contains the fields title, pubDate, link, and description. There may be many other fields, depending on the kind of data the RSS feed carries, but it's a good bet that these 4 fields will be in EVERY item in an RSS feed.

This is the key to understanding how RSS works. Once you have that, the rest is pretty easy.

The data for each item is stored in a Python data type called a dictionary.
You can retrieve the value of any of these fields in the item using the Python dictionary.get() method.
If you try to retrieve a value that's not in the dictionary ( it has no key by that name ), like
something = str( item.get('BadKeyName') )
then .get() returns None, which str() turns into the string 'None'. This will NOT abend ( crash ) the program.

I know that for some people, what I just said can be confusing.
So.. if you're looking for clarification about this, see https://www.w3schools.com/python/ref_dictionary_get.asp
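
Here is a minimal sketch of how .get() behaves, using a made-up dictionary. Note that .get() also takes an optional second argument, a default value to return instead of None, which can keep 'None' from showing up in your output.

item = { 'title': 'Example story' }            # A made-up item.
print( item.get('title') )                     # Prints: Example story
print( item.get('BadKeyName') )                # Prints: None
print( item.get('BadKeyName', 'no value') )    # Prints: no value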



Filtering the NEWS ( a simple news aggregator )

In today's crazy world, keeping up with the news can be like drinking from a fire-hose. The sheer number of news stories can be overwhelming. And are you really all that interested in all the fluff pieces and talking heads? Are you really interested in the green paint shortage in Japan, or in the cool new baby names for 2020?

Wouldn't it be nice to filter the news feeds for stories you're interested in?

Doing an internet search for the words “News RSS feeds” will turn up a very long list of services. Some are paid subscriptions, and some are FREE. For the code example below, I have selected 3 of the more reputable services, but you could easily add as many as you would like.

Take a few minutes to consider the following code example. It will scan the 3 RSS news feeds, and print a list of stories that contain one or more of the search terms. A link is also included, in case you are interested in reading more about a story.

Note that this uses the RSSGet() function that is shown above.



def RSSFilterNews(url, searchTerms):
  """Print any story in the feed that mentions one of the search terms."""
  for story in RSSGet(url):
    bolFlag = False
    # Some feeds append HTML after the story text; keep only the part
    # before the first '&lt;div' entity.
    text = story.get( 'description', '' ).split("&lt;div")
    for search in searchTerms:
       if search.lower() in text[0].lower(): bolFlag = True
    if bolFlag:
      print("***** From: ", url)
      print("-" * 80 )
      print( 'title: ', story.get('title') )
      print( "link: " , story.get( 'link' ) )
      print( story.get( 'pubDate' ))
      print( "" )
      print( text[0] )
      print( ("=" * 80) + "\n" )


# Words to search for.
searchTerms = ['Seattle', 'baking', 'plywood', 'alligator', 'python']

# CNN Top News Stories.
RSSFilterNews( "http://rss.cnn.com/rss/cnn_topstories.rss", searchTerms )

# New York Times home page.
RSSFilterNews("https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml", searchTerms )

# CBN News Feed.
RSSFilterNews("http://www.cbn.com/cbnnews/us/feed/", searchTerms )


A sample of the output.
***** From: http://rss.cnn.com/rss/cnn_topstories.rss
--------------------------------------------------------------------------------
title: Humpback whale swims free from crocodile-infested river
link: http://rss.cnn.com/~r/rss/cnn_topstories/~3/MNApaCalQxw/humpback-whale-crocodile-river-lon-orig-mrg.cnn
Mon, 21 Sep 2020 11:20:02 GMT

A humpback whale is now free, after being stranded in Australia's East Alligator River for two weeks.
================================================================================

***** From: http://rss.cnn.com/rss/cnn_topstories.rss
--------------------------------------------------------------------------------
title: A Florida woman was attacked by a 10-foot alligator while trimming trees
link: http://rss.cnn.com/~r/rss/cnn_topstories/~3/qllJqmHRqUk/index.html
Sun, 20 Sep 2020 21:12:24 GMT

A Florida woman is recovering from injuries she received when she was attacked by a 10-foot, 4-inch alligator while trimming trees in Fort Myers.
================================================================================


BLOGS

The RSS feeds for blogs work in exactly the same way as for news stories (see the section above). You can think of a blog feed as another news feed, just from a different source.

You could use the same function shown above to filter a blog feed. But it's more likely that you would want to see a list of the most recent 10 postings ( example below ).


def fetchBlogs(url, MaxPosts = 10):
  """Fetch the most recent posts for the given blog url."""
  items = RSSGet( url )
  blogName = str( items[0].get('title') )   # items[0] holds the channel info.
  # Items 1 onward are the actual posts; show at most MaxPosts of them.
  c = min( len(items), MaxPosts + 1 )
  for x in range( 1, c ):
    item = items[x]
    print('blogName: ' + blogName )
    print('title: ' + str(item.get('title')) )
    print('pubDate: ' + str(item.get('pubDate')) )
    print('link: ' + str(item.get('link')) )
    print( "=" * 80 )


fetchBlogs( "https://www.howtogeek.com/feed/" )
   


A sample of the output.
================================================================================
blogName: How-To Geek
title: Mobile World Congress 2021 Delayed From March to June (for Now)
pubDate: Wed, 23 Sep 2020 12:49:34 +0000
link: https://www.reviewgeek.com/54898/mobile-world-congress-2021-delayed-from-march-to-june-for-now/
================================================================================
blogName: How-To Geek
title: Adobe's AI-Based Liquid Mode Makes PDFs Easy to Read on Smartphones
pubDate: Wed, 23 Sep 2020 12:40:58 +0000
link: https://www.reviewgeek.com/54900/adobes-ai-based-liquid-mode-makes-pdfs-easy-to-read-on-smartphones/
================================================================================
blogName: How-To Geek
title: Google Assistant's New Workday Routine Will Help Keep You on Schedule
pubDate: Wed, 23 Sep 2020 11:59:22 +0000
link: https://www.reviewgeek.com/54857/google-assistants-new-workday-routine-will-help-keep-you-on-schedule/
================================================================================


EARTHQUAKES

The U.S. Geological Survey (USGS) has a great deal of FREE data about earthquakes. Their website is https://www.usgs.gov

They also have many RSS feeds: https://earthquake.usgs.gov/earthquakes/feed/

I am using the one containing data for the past 7 days: https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.atom

Note that these RSS feeds are only updated every few hours, so this would not be useful for a real-time quake alarm system. For that, you would need to subscribe to one of the many available push-notification services.



# Earthquake Lists.

url = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.atom"
records = RSSGet(url)
#print( records )

# Display all quakes over the past 7 days, of a mag 3.0 or greater.
# Titles look like 'M 4.5 - 100 km W of somewhere', so parse the magnitude
# out as a number rather than comparing strings.
for record in records:
  title = str( record.get('title') )
  try:
    mag = float( title.split()[1] )
  except (IndexError, ValueError):
    continue
  if mag >= 3.0:
     print( str(record.get('updated')) + ": " + title )
print("=" * 80)

# Display all quakes over the past 7 days, in Oklahoma.
for record in records:
  if "Oklahoma" in str( record.get('title') ):
     print( str(record.get('updated')) + ": " + str(record.get('title')) )
print("=" * 80)


A sample of the output.
2020-09-15T16:08:40.009Z: M 0.9 - 5 km N of Stroud, Oklahoma
2020-09-15T16:32:30.995Z: M 1.3 - 3 km NW of Garber, Oklahoma
2020-09-15T15:59:46.279Z: M 1.1 - 9 km NW of Perry, Oklahoma
2020-09-14T19:14:37.526Z: M 1.4 - 2 km NW of Lindsay, Oklahoma
2020-09-15T15:48:06.799Z: M 1.7 Quarry Blast - 5 km SW of Salina, Oklahoma
2020-09-15T14:50:46.063Z: M 1.2 Quarry Blast - 6 km WSW of Ralston, Oklahoma
2020-09-15T14:06:35.877Z: M 1.4 - 5 km NW of Ames, Oklahoma
2020-09-15T13:59:47.594Z: M 1.1 - 3 km WNW of Okarche, Oklahoma
2020-09-15T15:03:24.183Z: M 1.1 - 6 km S of Guthrie, Oklahoma
2020-09-15T14:25:29.184Z: M 1.1 - 10 km E of Hennessey, Oklahoma
2020-09-15T14:57:22.569Z: M 1.3 - 2 km S of Tuttle, Oklahoma
2020-09-15T14:51:59.294Z: M 1.1 - 4 km NNW of Okarche, Oklahoma
2020-09-15T14:46:58.496Z: M 0.8 - 4 km S of Avard, Oklahoma
2020-09-15T14:35:56.172Z: M 1.0 - 3 km SSE of Tuttle, Oklahoma


The Volcano Discoveries website also has an RSS feed with detailed info about recent earthquakes.
I will have more info about the Volcano Discoveries website in the section about volcanoes ( below ).


# Earthquake Reports

records = RSSGet("https://www.volcanodiscovery.com/earthquake-news.rss")
for record in records:
   print( record.get('title') )
   print( record.get('pubDate') )
   print( "" )
   print( record.get('description') )
   print( ("=" * 80) + "\n" )


A sample of the output.
================================================================================

Damaging earthquake hits Nepal this morning
Wed, 16 Sep 2020 06:37:35 +0000

A moderately strong earthquake hit Nepal this morning at 5:19 am local time (which
is 5 hours 45 minutes ahead of GMT). According to various agencies, the quake had a
magnitude between 4.9 and 6.0 and struck approx. 60 km NE of the capital Kathmandu
in the Sindhupalchok district.
The quake was widely felt within nearby areas and the capital where thousands felt
a short shake, although many...
================================================================================

World Earthquake Report for Tuesday, 15 September 2020
Wed, 16 Sep 2020 00:24:00 +0000

Summary: 2 quakes 6.0+, 31 quakes 4.0+, 87 quakes 3.0+, 190 quakes 2.0+
(310 total) Global seismic activity level on 15 September 2020: HIGH Magnitude
6+: 2 earthquakes Magnitude 4+: 31 earthquakes Magnitude 3+: 87 earthquakes
Magnitude 2+: 190 earthquakes No quakes of magnitude 5 or higher. Total seismic energy
estimate: 3.3 x 10^14 joules (91 gigawatt hours, equivalent to 78273 tons of TNT or...
================================================================================



HURRICANES

The NATIONAL HURRICANE CENTER and CENTRAL PACIFIC HURRICANE CENTER have a number of RSS feeds concerning hurricanes.

For a list of their RSS feeds, see https://www.nhc.noaa.gov/aboutrss.shtml

I find that the https://www.nhc.noaa.gov/index-at.xml feed is quite good.

# Hurricane Reports

for record in RSSGet("https://www.nhc.noaa.gov/index-at.xml"):
   headline = record.get( 'nhc:headline' )
   if headline is not None:
     print("*** NEWS FLASH from the National Hurricane Center ***")
     print( headline )
     print( "" )
   print( "title: ", record.get('title') )
   print( "pubDate: ", record.get('pubDate') )
   if record.get( 'nhc:center' ) is not None:
     print( "" )
     print( "StormCenter: ", record.get( 'nhc:center' ))
     print( "Heading:     ", record.get( 'nhc:movement' ))
     print( "Pressure:    ", record.get( 'nhc:pressure' ))
     print( "Wind Speed:  ", record.get( 'nhc:wind' ))
   print( "" )
   print( record.get('description') )
   print( "=" * 80 )
   print("")


A sample of the output.
================================================================================

*** NEWS FLASH from the National Hurricane Center ***
...OUTER BANDS OF TEDDY SHOWING UP ON BERMUDA RADAR...
...DANGEROUS RIP CURRENTS FORECAST ALONG WESTERN ATLANTIC
BEACHES FOR SEVERAL DAYS...

title: Summary for Hurricane Teddy (AT5/AL202020)
pubDate: Sun, 20 Sep 2020 17:51:30 GMT

StormCenter: 28.6, -63.1
Heading: NW at 9 mph
Pressure: 964 mb
Wind Speed: 105 mph

...OUTER BANDS OF TEDDY SHOWING UP ON BERMUDA RADAR...
...DANGEROUS RIP CURRENTS FORECAST ALONG WESTERN ATLANTIC
BEACHES FOR SEVERAL DAYS...
As of 2:00 PM AST Sun Sep 20
the center of Teddy was located near 28.6, -63.1
with movement NW at 9 mph.
The minimum central pressure was 964 mb
with maximum sustained winds of about 105 mph.
================================================================================


VOLCANOES

The Volcano Discoveries website ( https://www.volcanodiscovery.com ) has a very good RSS feed concerning volcanoes.

They also have a feed with detailed info about earthquakes ( see EARTHQUAKES section above).

Info about their RSS feeds is at https://www.volcanodiscovery.com/rss-feeds.html

# Volcano Reports

records = RSSGet("https://www.volcanodiscovery.com/volcanonews.rss")
for record in records:
   print( record.get('title') )
   print( record.get('pubDate') )
   print( "" )
   print( record.get('description') )
   print( ("=" * 80) + "\n" )


A sample of the output.
================================================================================

Copahue volcano (Chile/Argentina): incandescence visible from crater
Fri, 11 Sep 2020 06:20:59 +0000

The activity of the volcano has remained essentially unchanged characterized by
continuous ash emissions with incandescence visible from crater at night.
Visible glow and near-constant ash emissions could suggest rise of fresh magma.
================================================================================

Volcanic activity worldwide 10 Sep 2020: Semeru volcano, Reventador, Sangay,
Sakurajima, Sabancaya
Thu, 10 Sep 2020 21:00:02 +0000

Sakurajima (Kyushu, Japan): (10 Sep) Visible glow at night, as can be seen in the
image, and near-constant emissions of gas and small amounts of ash suggest
continued rise of fresh magma probably accumulating as a lava dome in the inner
summit crater. The warning bulletin states that ballistic impacts of volcanic
bombs and pyroclastic flows could affect an area of about 2 km distance from the...
================================================================================



WEATHER

There are a number of FREE weather-related RSS feeds, but I find that the ones from the BBC are the most useful.

Yes, that IS the BBC; the British TV and radio service in England.

Most other FREE weather-related RSS feeds focus only on their country of origin, with little, if any, info for other countries. The BBC weather feeds simply have the most current weather info for any location in the world.

To use this RSS feed:
1. Go to https://www.bbc.com/weather and enter the name of the city nearest you in the search box at the top of the web page. In my case, that's Seattle.
2. Note the 7-digit code for your city that appears on the URL line of your browser. In my case, it's 5809844.
3. In the example code below, replace my 7-digit code ( 5809844 ) with the 7-digit code for your city.

Examples:
The code for Seattle is 5809844
The code for New York is 5128581
The code for Chicago (midway airport) is 4887472
The code for Paris is 2988507

# Weather - Current conditions.

url = "https://weather-broker-cdn.api.bbci.co.uk/en/observation/rss/5809844"
for record in RSSGet(url):
  print( record.get("description"))
  print("")

# Weather forecast

url = "https://weather-broker-cdn.api.bbci.co.uk/en/forecast/rss/3day/5809844"
for record in RSSGet(url):
  print( record.get("title"))
  print( record.get("description"))
  print("")


A sample of the output.
Latest observations for Seattle from BBC Weather, including weather,
temperature and wind information

Temperature: 19°C (66°F), Wind Direction: South Easterly, Wind Speed: 4mph,
Humidity: 63%, Pressure: 1019mb, Falling, Visibility: Good

BBC Weather - Forecast for Seattle, US
3-day forecast for Seattle from BBC Weather, including weather,
temperature and wind information

Today: Sunny Intervals, Minimum Temperature: 12°C (54°F) Maximum Temperature: 21°C (71°F)
Maximum Temperature: 21°C (71°F), Minimum Temperature: 12°C (54°F), Wind Direction: South
Westerly, Wind Speed: 6mph, Visibility: Good, Pressure: 1018mb, Humidity: 59%, UV Risk: 2,
Pollution: -- , Sunrise: 06:55 PDT, Sunset: 19:09 PDT

Monday: Light Cloud, Minimum Temperature: 13°C (56°F) Maximum Temperature: 21°C (71°F)
Maximum Temperature: 21°C (71°F), Minimum Temperature: 13°C (56°F), Wind Direction: South
Westerly, Wind Speed: 7mph, Visibility: Good, Pressure: 1014mb, Humidity: 73%, UV Risk: 2,
Pollution: -- , Sunrise: 06:56 PDT, Sunset: 19:07 PDT

Tuesday: Thick Cloud, Minimum Temperature: 14°C (58°F) Maximum Temperature: 21°C (69°F)
Maximum Temperature: 21°C (69°F), Minimum Temperature: 14°C (58°F), Wind Direction: South
Westerly, Wind Speed: 9mph, Visibility: Good, Pressure: 1017mb, Humidity: 70%, UV Risk: 1,
Pollution: -- , Sunrise: 06:57 PDT, Sunset: 19:05 PDT



PODCASTS

And YES, RSS can still be used to download directory listings for podcasts.

The following code example will download the directory listings of 3 popular podcasts, and save the info into a text file named 'podcasts.txt'. The info includes a URL link that can be used to download, or play, each episode if you choose.

You can add more podcasts to the list by finding the RSS feed of your favorite podcast ( usually a link on the podcast's home page ), and adding an additional line of code to this example.

Use your favorite text editor to view the file, and any web browser, or the vlc media player, to play the episodes.

In short, this is a very primitive, but workable, pod-catcher. It can also be an interesting toy to play with.
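
And if you would rather download an episode than stream it, here is a minimal sketch using only the standard library. The URL is one of the links this code produces ( see the sample output below ); the output filename is my own choice.

import urllib.request

# Download one episode to disk as 'episode.mp3'.
url = ( "https://talkpython.fm/episodes/download/272/"
        "no-iot-things-in-hand-simulate-them-with-device-simulator-express.mp3" )
urllib.request.urlretrieve( url, "episode.mp3" )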




def fetchPodcast(url, MaxEpisodes = 10):
  """Fetch the most recent episodes for the given podcast."""
  items = RSSGet(url)
  pcName = str( items[0].get('title') )  # items[0] holds the podcast's own info.
  # Items 1 onward are the episodes; write out at most MaxEpisodes of them.
  c = min( len(items), MaxEpisodes + 1 )
  with open('podcasts.txt','a') as output:
    for x in range(1, c):
       item = items[x]
       # The guid value may still carry tag attributes; keep only the
       # part after the closing '>'.
       guid = str( item.get('guid') )
       if ">" in guid: guid = guid.split(">")[1]
       # Pick the mp3 link out of the enclosure attributes.
       link = str( item.get('enclosure') )
       if 'url="' in link: link = link.replace('url="',"")
       if ".mp3" in link: link = link.split('.mp3')[0] + '.mp3'
       description = str( item.get('description') ).split("<")[0]
       output.write( 'Podcast: ' + pcName + "\n" )
       output.write( 'title: ' + str(item.get('title')) + "\n" )
       output.write( 'pubDate: ' + str(item.get('pubDate')) + "\n" )
       output.write( 'unique ID number: ' + guid + "\n" )
       output.write( link + "\n" )
       output.write( '\n' + description + "\n" )
       output.write( ("=" * 80) + "\n" )

# If the file already exists, erase it.
with open('podcasts.txt','w') as output: output.write("")

# The Hidden Brain podcast
fetchPodcast( "https://feeds.npr.org/510308/podcast.xml" )

# Fetch the most recent 20 episodes of 'Cut & Paste' podcast.
url = "https://kwmu-rss.streamguys1.com/cut_and_paste/cut-and-paste.xml"
fetchPodcast( url, 20 )

# Fetch the most recent 15 episodes of 'Talk Python to me' podcast.
fetchPodcast( "https://talkpython.fm/episodes/rss", 15 )


A sample of what is written to the file.

================================================================================
Podcast: Talk Python To Me
title: #272 No IoT things in hand? Simulate them with Device Simulator Express
pubDate: Sun, 12 Jul 2020 00:00:00 -0800
unique ID number: 99044d9d-4bcb-458e-8a41-ea7634d8905a
https://talkpython.fm/episodes/download/272/no-iot-things-in-hand-simulate-them-with-device-simulator-express.mp3

Python is one of the primary languages for IoT devices. With runtimes such as CircuitPython and MicroPython, they are ideal for the really small IoT chips.
================================================================================
Podcast: Talk Python To Me
title: #271 Unlock the mysteries of time, Python's datetime that is!
pubDate: Sat, 04 Jul 2020 00:00:00 -0800
unique ID number: 0ec463af-7389-4d43-a908-3861143562c4
https://talkpython.fm/episodes/download/271/unlock-the-mysteries-of-time-pythons-datetime-that-is.mp3

Time is a simple thing, right? And working with it in Python is great. You just import datetime and then (somewhat oddly) use the datetime class from that module.
================================================================================




In conclusion.

There are also RSS feeds concerning the current flow of rivers, water levels in lakes, air quality, space weather, the northern lights, and the list goes on. You only need to find them.

And that concludes what I have to say about Python and RSS. I hope that someone out there finds this useful.

Everyone have a good day, and be kind to each other.

Joe Roten. www.gsw7.net/joe

Last updated: 2020-09-23



Written by Joe Roten

Computer tech, Graphic Artist, Photographer, Writer, Educator, Programmer, Jack of many trades, Social gadfly, and Scholar without portfolio. http://www.gsw7.net/joe/


As always

The information on my website is FREE.
But donations to help pay for Coffee and Beer are always welcomed.
Thanks.