A Starting Point: A Web App Security Test Plan for End Users

In an earlier post I discussed why we need security standards for education-related web apps.  Today we really don't have any. Student privacy legislation typically requires "reasonable" security.  (This is due in part to the fact that legislation moves more slowly than technology: specific technical requirements written into law today might not outlast the legislative cycle.) Industry-driven student privacy standards also tend to speak of "reasonable" security with few specifics. TRUSTe's definition of kids' privacy does not require protection of students' personal information and academic activity, just their credit cards and SSNs.  Nobody has really specified what reasonable security means.  In this post I'll share a starting point, based on the OWASP Application Security Verification Standard (ASVS) and my own observations of web app security problems.

Securing web applications takes effort, and attack methods grow more sophisticated all the time.  But the blueprint for providing a baseline of strong security is well defined.  The OWASP ASVS spells out a comprehensive list of requirements for designing and verifying a secure web application and defines different levels of verification.  A security standard appropriate for apps collecting students' personal information and academic activities should incorporate most of the requirements from the ASVS 'Standard' level of verification.  Many of those requirements, however, need access to the inner workings of a web service's operations.  The test plan I'm presenting here focuses on the 'Opportunistic' level of verification, the checks that end users of the web application can perform themselves.  My outlook is that the rigor of the practices we *can* observe is an indicator of the practices we *can't* observe as end users.  So by assessing a service against the observable security practices we can create a yardstick to compare sites and make decisions about whether to use them.

To perform the tests in this plan, no special access is needed beyond an account with the service, and no special equipment is needed beyond a computer and some free software programs.  Every item on the test addresses some level of security risk. Many of the individual risks are minor, but as a whole they paint a picture that's often more important than the individual weaknesses cataloged in the test.  Having said that, if every education-related web app met this test standard, it would be a big leap forward from where we are today.

It’s my hope that this test plan might be adopted by parents, administrators and teachers as a yardstick or minimum set of requirements for apps that collect our children’s and students’ information.   It’s embedded below for viewing and downloading.

Web-App-Security-Test-Form-Feb15 (downloadable PDF)

VTech vs. EdTech

This week we’ve seen news of a major breach of users’ data from an online service run by VTech.  What sets this one apart is that personal information was stolen from hundreds of thousands of children’s accounts, associated with some of the millions of adult accounts that were also compromised.

Troy Hunt has posted a detailed analysis of the breach and other problems with VTech’s web applications.  You can read it here on Troy’s site or here on Ars Technica.  I encourage you to read it.

Here is what Troy Hunt had to say about the severity of the breach: 

“When it’s hundreds of thousands of children including their names, genders and birthdates, that’s off the charts. When it includes their parents as well – along with their home address – and you can link the two and emphatically say “Here is 9 year old Mary, I know where she lives and I have other personally identifiable information about her parents (including their password and security question)”, I start to run out of superlatives to even describe how bad that is.”

When I read this paragraph, head nodding, I thought of the running list I keep of my own kids’ identifiable personal information I’ve been able to gain unauthorized access to through remote attack vulnerabilities in online services used at their schools. (A remote attack is something that does not require access to the user’s network traffic, and can be done from anywhere).

The list is below. I was able to collect all of this by exercising flaws in web pages and interfaces in the education-related services that hold my kids' information.  It wasn't all in one place like the VTech information, but it goes far beyond what was held there.

  • full name
  • gender
  • date of birth
  • in-class behavior records
  • reading level and progress assessments
  • math skill and progress assessments
  • in-class test and quiz scores
  • report cards
  • ability to send private message to a student through an app
  • voice recordings
  • usernames (some with passwords)
  • password hashes
  • school lunch assistance status
  • name and address of school
  • teacher name
  • classmate names (through class rosters)
  • class photos with students labeled by name
  • parent email addresses
  • parent names
  • home address
  • home phone number

My kids are still in elementary school.  Simply by going to school they’ve already had all of this information exposed to the possibility of unauthorized access and collection.

I have no knowledge that any of this information has actually been accessed without authorization, but the only difference between a responsible disclosure and a data breach is the ethics of the person who finds the vulnerability.  Most of these vulnerabilities exposed many thousands of students to potential breaches; some exposed millions of students to potential breaches of their personal and educational information.

This is a system-wide problem that educators, parents and technology providers must work together to address.  Things are improving but we have a long way to go.  Here are some previous posts on that topic:

Why we need standards: part one of many

A starting point: end-user web app security test plan

Edsurge: Why student data security matters


Adults want control and limits on collection of their information – students deserve the same

As I read today's report from the Pew Research Center on Americans' Attitudes about Privacy, Security and Surveillance, a few of the survey results jumped out at me (the results below are from the report linked above):

  • 93% of adults say that being in control of who can get information about them is important
  • 90% of adults say that controlling what information is collected about them is important
  • 88% say it is important that they not have someone watch or listen to them without their permission
  • Most want limits on the length of time that records of their activity can be retained
  • Americans have little confidence that their data will remain private and secure

Also:

  • 93% say it’s important to be able to share confidential information with another trusted person

Does this sound familiar? The debate about student data revolves around many of the same considerations: we recognize that collection and sharing of information can be beneficial in the academic setting, but we are concerned with who can collect what data, what permissions are required, how long the data can be retained, and how it will be kept secure and private.

In this survey, adults gave the loud and clear message that they feel it's important to have control and limits on the data collected about them.  We can expect that today's students will hold similar views about their own information when they become adults.  As children, they have almost no capacity to control or limit the information collected about them, how long it's kept, or how well it's protected.

It’s up to us — the adults — to look out for their interests, with the same level of importance we place on looking after our own information.

Guest Blogger: Password Complexity and Cracking

Today we’ll hear from a guest blogger who has studied password cracking and how password complexity affects cracking time.  Katherine, thanks for sharing your research!


My name is Katherine, I am a student in fifth grade, and I am interested in computer security. I recently did an experiment to see if my computer can guess passwords, and I entered the project into my school’s STEM Expo.

It is important to know if your password can be guessed easily, because an insecure password may allow your data to be accessed without your knowledge. Computer crime is estimated to cost $100 billion in the United States and $500 billion worldwide every year.

When you choose a computer password, the computer scrambles it and stores the resulting “hash” in memory. When you enter your password at a later time, the computer uses your entry to create another hash and compares it to the stored hash. If the hashes match, you have entered the correct password. Many people choose passwords based on common words or names, because they are easy to remember. I predicted that I would be able to use my computer to guess passwords that are based on common words or names.

I used an Apple laptop computer for this experiment, and I created 10 user accounts with different passwords: six using common words or names, two using common words with numbers substituted for letters, and two using a mixture of random characters. I found a list of 5000 commonly used passwords online, and I copied the hashed passwords from the test computer. I then wrote a Python program that creates hashes of the common passwords and compares them to the hashes from the test computer. When a hash from the list of common passwords matches a hash from the test computer, the password has been identified.
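For readers curious about the mechanics, here is a minimal sketch of the dictionary-attack idea Katherine describes. It assumes the copied hashes are unsalted SHA-256 digests stored one per line in hashes.txt and the common passwords are in common_passwords.txt (both filenames are hypothetical); real macOS account hashes are salted and use a slower algorithm, so this only illustrates the comparison step.

```python
import hashlib

# Load the hashes copied from the test computer (hypothetical file: one hex digest per line).
with open("hashes.txt") as f:
    target_hashes = {line.strip() for line in f if line.strip()}

# Load the list of commonly used passwords (hypothetical file: one password per line).
with open("common_passwords.txt") as f:
    common_passwords = [line.strip() for line in f if line.strip()]

# Hash each common password and check whether it matches any stored hash.
for password in common_passwords:
    digest = hashlib.sha256(password.encode("utf-8")).hexdigest()
    if digest in target_hashes:
        print(f"Match found: {password}")
```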

I was able to identify six of the 10 passwords from the test computer in less than one minute, including "password", "letmein", "charlie", "qwerty", "trustno1", and "123456". I was not able to identify the more complex passwords, including "d3bb13", "Vxjw2!Z", and "362?tu1Z". I concluded that it is important to choose a strong password that includes a mixture of upper case letters, lower case letters, numbers, and symbols. A computer keyboard has 10 digits (0-9), 26 lower case letters (a-z), 26 upper case letters (A-Z), and 33 symbols, so a password can be built from as many as 95 possible characters.

It is important to realize that long passwords are more difficult to guess than short passwords. Very fast computers can make about 1 trillion guesses per second, so it is important to choose a password with many possible combinations in order to make a "brute force" attack, in which all combinations are tried, more difficult:


4 Character Password:

  • Numbers only: 10,000 combinations
  • Numbers + Lower Case Letters: 1,727,604
  • Numbers + Lower Case + Upper Case: 15,018,570
  • Numbers + Lower Case + Upper Case + Symbols: 82,317,120

7 Character Password:

  • Numbers only: 10,000,000 combinations
  • Numbers + Lower Case Letters: 80,603,140,212
  • Numbers + Lower Case + Upper Case: 3,579,345,993,194
  • Numbers + Lower Case + Upper Case + Symbols: 70,576,641,626,495

10 Character Password:

  • Numbers only: 10,000,000,000 combinations
  • Numbers + Lower Case Letters: 3,760,620,109,779,060
  • Numbers + Lower Case + Upper Case: 853,058,371,866,181,866
  • Numbers + Lower Case + Upper Case + Symbols: 60,510,648,114,517,017,120


In summary, even a very fast computer will take a long time to try all possible combinations of a 10 character password that uses a mixture of 95 possible symbols. Since not everyone chooses a long and complex password, it is possible that someone who is interested in guessing your password will instead try to crack an easier password.
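To put rough numbers on that, here is a short sketch of the arithmetic, assuming the 1 trillion guesses per second figure above (the totals here count only passwords of exactly the given length, so some figures differ slightly from the table above):

```python
# Rough brute-force times, assuming an attacker who can test
# 1 trillion (1e12) guesses per second, as mentioned above.
GUESSES_PER_SECOND = 1e12
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

charsets = {
    "numbers only": 10,
    "numbers + lower case": 36,
    "numbers + lower + upper case": 62,
    "numbers + lower + upper + symbols": 95,
}

for length in (4, 7, 10):
    for name, size in charsets.items():
        combinations = size ** length
        years = combinations / GUESSES_PER_SECOND / SECONDS_PER_YEAR
        print(f"{length} characters, {name}: "
              f"{combinations:.2e} combinations, about {years:.2e} years to try them all")
```

Even at that speed, the 10-character, 95-symbol case works out to a couple of years of nonstop guessing, while the short numeric cases fall in a fraction of a second.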

Finally, here is a list of the passwords that were most commonly stolen in 2013:


123456
password
12345678
qwerty
abc123
123456789
111111
1234567
iloveyou
adobe123
123123
admin
1234567890
letmein
photoshop
1234
monkey
shadow
sunshine
12345
password1
princess
azerty
trustno1
000000

While doing this project, I learned a lot about why it is so important to choose a strong password, and I wish you the best of luck with keeping your data safe!

Security Observations of Schoolmessenger’s Secure Document Service

In this post I’ll discuss my observations of the security of Schoolmessenger’s secure document delivery service. Schoolmessenger is a signatory of the Student Privacy Pledge.

[Update, May 5, 2015: I received word last week of the following security improvements from Schoolmessenger, summarized here and updated inline below.  Thanks to Schoolmessenger for their responsiveness and work toward a more secure service!]

  • Forced HTTPS
  • A return to 256-bit SSL
  • Reduction of Personally Identifiable Information in the notification email and the body of the page

Email TLS is on the roadmap for future work.

[Update: this post was updated on March 20th to add observations of the password-protected document delivery method]

Overview of the service

The Schoolmessenger Secure Document Delivery service can be used by schools to deliver documents to parents.   Parents receive an email with a link to the document, which is served using SSL/HTTPS when the parent clicks the link.  The document can be configured by the school district to be served with or without password protection.  Schoolmessenger’s description of the service can be found at this link.

Summary of observations

The notification email from Schoolmessenger to the parent is sent from Schoolmessenger's servers without Transport Layer Security (TLS). (Email TLS provides encryption and authentication of email as it traverses the shared network between companies' mail servers; I wrote a detailed post on TLS for email last week.)  Though the document itself is not stored in the browser cache, the URL providing access to it is, and the URL is also stored in the browser history.  For files sent with password protection, the page reveals the name of the student, the school, and the contact information of the school administrator who sent the file before the password is checked.  If the URL is modified to http instead of https, the document is served without SSL.  Company materials describe the service as using 256-bit SSL, but the server hosting the document exclusively uses 128-bit SSL.

I would like to thank and acknowledge the contributions of a colleague who works at a large school district, who collaborated on this analysis.

Notification and response

In early March I requested information about which domain was used to host documents from the Secure Document service and asked for a copy of their Specification Sheet, using the contact form here.  On March 16th I notified the company of the findings included in this report by emailing them to feedback@schoolmessenger.com, on the advice of a Schoolmessenger call center agent.  I sent a follow-up mail on March 18th to the same address.  Schoolmessenger has not responded to my inquiries.

Detailed observations

TLS on outgoing mail 

Email TLS is used to encrypt and authenticate mail as it traverses the shared network between email servers (see my previous post on email TLS for more details).  The Google Safer Email page shows that none of the mail Schoolmessenger sends (via swiftwavenetwork) to gmail recipients uses email TLS.  (Note that Google’s data lags by about 5 days – today’s snapshot shows observations from March 14th.)

Screen Shot 2015-03-20 at 7.37.43 AM

Google’s data is consistent with the headers of the mail I received in my gmail account, which do not indicate that TLS was used.

Screen Shot 2015-03-15 at 8.30.29 PM

URL contains sensitive information and is stored in browser history and cache

The URL for the secure document contains unique keys that identify a document.  In the links I received, there was an 11-character code that appeared to be base64-encoded and a 64-character hexadecimal code.

https://msg.schoolmessenger.com/m/?s=<11-character code>&mal=<64-character hex code>

This is counter to OWASP ASVS requirement 9.3:

Screen Shot 2015-03-08 at 11.04.55 PM

One reason for this requirement is that the URL can be stored in the browser history, as shown below.  It can be viewed in the history or reached by using the back arrow.

Screen Shot 2015-03-15 at 8.49.04 PM

The optional password protection largely mitigates this risk.  If it is not enabled by the school district, however, the URL in the history gives direct access to the file.  Though the URL could also be stored in server or proxy logs, it is primarily a concern if the service is used from a shared computer where the history can be viewed by other users.

The secure document is transferred as the response to a POST request and is not cached by the browser. However, the response to the initial GET request is cached, which results in the URL being stored in the browser's disk cache.

Screen Shot 2015-03-15 at 8.59.50 PM

Screen Shot 2015-03-15 at 9.41.44 PM

As with history logging, this is mitigated for files that are configured by the school to require passwords for viewing and is primarily a concern only if the document is viewed from a shared computer.

Documents sent with passwords reveal information before the password is entered by the user, with limited controls on brute force password attacks  [Update: this section was added on March 20th, after the original report was posted]

[Update 5/5/2015 – Schoolmessenger has notified me that there is now less PII in notification emails and page text]

If a document is distributed with the optional password enabled, the page served by the link sent to the parent reveals information before the password is entered or verified.  The page includes the full name of the student, the name of the school district, and the name and email address of the school official who sent the document.

Screen Shot 2015-03-20 at 5.44.47 PM

In a quick check of protections against brute force attempts on the document password, I observed no rate limiting between requests. After about 1000 attempts the site rejected new attempts for under a minute, then started accepting them again.  According to the company's literature, the password can be chosen from fields of the document, and this knowledge could be used to constrain the set of passwords tried in a brute force attempt.  For example, if the password is a birthdate, 1000 attempts covers about three years' worth of dates.
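A quick sanity check of that last figure, and of how far an unthrottled attacker could get:

```python
# How many calendar dates do 1000 unthrottled guesses cover,
# assuming one guess per candidate birthdate?
attempts = 1000
days_per_year = 365.25

print(f"{attempts} attempts covers about {attempts / days_per_year:.1f} years of birthdates")
# Roughly 2.7 years. If the student's approximate age is known, the plausible
# birthdate range is only a few years of dates, so bursts like the one observed
# could cover most of it.
```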

Site allows the document to be served over HTTP, with no encryption, if the URL is modified to http.

[Update 5/5/2015 – Schoolmessenger has notified me that the site now forces HTTPS]

If the URL is modified to use "http" instead of "https", the document is served over HTTP, without encryption.  The proxy log below shows the PDF file being transferred using HTTP.  Redirecting HTTP requests to HTTPS and using HTTP Strict Transport Security (HSTS) would prevent this from happening.
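A minimal sketch of those two defenses, using Flask purely as a stand-in framework (this is not Schoolmessenger's actual stack); the proxy log capture follows below:

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Redirect any plain-HTTP request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts_header(response):
    # HTTP Strict Transport Security: tell browsers to use HTTPS for this site for the next year.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```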

Screen Shot 2015-03-15 at 9.49.18 PM

Product description materials describe 256-bit SSL, servers implement 128-bit SSL.

[Update 5/5/2015 – Schoolmessenger has notified me that the site has returned to 256-bit SSL]

Though this is not a significant security risk, it is worth noting that the SSL implemented by the service does not appear to match the product description on the company website.  One example of this is shown in the screenshot below, from the document at this link.

Screen Shot 2015-03-15 at 9.54.59 PM

In my browser I observed that the document download page was served with 128-bit SSL.

Screen Shot 2015-03-15 at 9.57.41 PM

Qualys SSL Labs TLS check shows that the msg.schoolmessenger.com domain's SSL is configured to exclusively use 128-bit encryption.

Screen Shot 2015-03-15 at 10.10.04 PM

For reference, the summary of the Qualys SSL Labs TLS check is shown below

Screen Shot 2015-03-20 at 7.33.20 AM
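For anyone who wants to reproduce this kind of check without the Qualys report, a small sketch using Python's ssl module prints the cipher a server negotiates with your own client (which may differ from the full range of ciphers Qualys tests):

```python
import socket
import ssl

host = "msg.schoolmessenger.com"  # the domain discussed above
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        name, version, bits = tls.cipher()
        print(f"Negotiated {name} ({version}), {bits}-bit encryption")
```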

IXL Security Observations

In this post I’ll discuss my observations of the security practices of IXL.com.  IXL is a signatory of the Student Privacy Pledge. The Student Privacy Pledge website does not publish the dates that companies signed the pledge but I observed that IXL was added to the Student Privacy Pledge signatory page after February 11th.

Overview of the service

IXL.com is a math and English "adaptive learning" web service that also offers iOS and Android apps. Students play educational 'games', and extensive metrics are compiled to assess each student's level and progress in math and English.

Summary of observations

The web site is served almost exclusively over HTTP. Usernames and passwords are sent with an HTTPS POST, but the rest of the traffic is HTTP. Though the login credentials are protected, this still poses a significant risk of session hijacking and data snooping attacks, which could expose students' names, affiliations and the assessment metrics collected by the service. The risk is heightened by the fact that the teacher account has a roster page that displays the names and current passwords of all the students in a given class.  This roster is sent across the network without encryption when a teacher accesses it, and could also be viewed by an attacker who has hijacked a teacher login session by sniffing the teacher's authentication cookies off the network.

The mobile apps appear to offer only student-facing functionality. The lack of encryption for all but the transaction that posts the username and password is consistent with the web app.

Authentication controls were in place to prevent remote attacks through enumeration of student or teacher IDs. Authentication tokens were invalidated when a user logged out of the session.

Notification and response

I notified the company of these observations on March 5th.  They got back to me a couple of days ago and have been very responsive since then.  I am including the company’s response to the observations I’d sent, with their permission:

As you noted, HTTPS is currently only used on IXL to protect login credentials and credit card data. Any risk created by using HTTP on the rest of the site is mitigated by the fact that, while technically possible, it is difficult for a determined hacker to “sniff” a user’s cookie value off of a modern network and navigate as if signed in as that user. However, we have been working to move our entire site over to HTTPS to eliminate this vulnerability. Though the vast majority of sites still use HTTP to serve up their content, we want to take every step we can to keep our site and our users secure.

Student passwords are intentionally stored in a reversible format to allow instructors to view them from their rosters. This allows instructors to retrieve them without making changes but, as you mentioned, makes them less secure when transmitted over HTTP. As we move to serve all IXL content over HTTPS, this risk will be eliminated, but it is worth noting that passwords for instructor-created students are the only passwords stored this way. Passwords for teachers, administrators, and parents who purchase their own accounts are all securely hashed when stored.

Thank you for pointing out that user credentials are included in the URL of the POST transaction when signing in on our Android app. Our engineering team has written a fix to remove this vulnerability, and we will patch it in once the update has been thoroughly tested by our Quality Assurance team.

I will add three thoughts of my own:

Sniffing cookies: For a determined hacker, sniffing cookies from a modern network and navigating a site as another user is not difficult at all. In 2010 a Firefox plugin called "Firesheep" automated the process.  Today, a free program called "Cookie Cadger" automates the process and can be used with a stock MacBook Pro to execute this attack, known as session hijacking.  In 2013 I worked on a story with Dana Liebelson of Mother Jones describing the problem, and the short embedded video below, from that article, shows a demonstration.

The vast majority of sites still use HTTP to serve content:  Certainly true, but it's a flawed comparison.  The vast majority of sites don't have usernames and passwords that create authenticated sessions, and don't collect personal information from students.  The more appropriate comparison is to social media or web-mail sites, which typically do use encryption (well, since Firesheep anyway).  The OWASP Transport Layer Security (TLS) cheat sheet includes a rule to use TLS (another name for SSL) for all login pages and all authenticated pages.

Plaintext passwords:  Storing passwords in a reversible format is clearly counter to best practices for general use, and transmitting them unencrypted makes it worse.  There is a practical reason behind it in this case, and coupled with SSL it could perhaps be considered a defined exception for classroom apps, so long as HSTS is enabled (and the cookie secure flag, if load balancers allow it) and the personal information collected by the service is restricted to a less sensitive subset. This is the sort of thing that might appear in an edtech-specific security standard, if there were consensus that it is acceptable practice in defined circumstances.
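For contrast with reversible storage, here is a minimal sketch of the conventional approach: salted, one-way hashing with a deliberately slow function (PBKDF2 here, purely as an illustration, not a claim about any vendor's implementation):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow to make offline guessing expensive

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived key); only these are stored, never the password itself."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Recompute the hash from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)
```

The trade-off IXL describes, letting teachers read students' current passwords from a roster, is impossible with a scheme like this, which is exactly why reversible storage is the weaker choice.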

Testing notes

I have access to my child’s paid student account, and also set up demo teacher and student accounts. Screen shots in this document are from the demo teacher and student accounts but all of the student account observations have been confirmed in the paid student account.

Detailed Observations

Sample reports
Screen shots of student assessment reports are shown below. In my demo accounts the data is sparse but the categories of data can be seen in these captures.

Student account

Screen Shot 2015-03-03 at 12.42.16 AM

Screen Shot 2015-03-03 at 12.43.39 AM

Teacher Account

Screen Shot 2015-03-03 at 12.50.30 AM

Screen Shot 2015-03-03 at 12.52.02 AM

Lack of encryption
The login form is served with HTTP, as are authenticated sessions. The username and password are sent with an HTTPS POST transaction. Though the login credentials are protected at login, this still poses a significant risk of session hijack and data snooping attacks which could expose students’ names, affiliations and their assessment metrics collected and compiled by the service.  (See my previous post on session cookies to read more about the risk of session hijacks posed by transmitting session cookies without encryption.)

Screen shots below show the login form and authenticated session, with URL indicating HTTP.  The session shown is a student account, but the teacher account I set up had the same behavior, as I will show later in the post.

Screen Shot 2015-03-03 at 12.54.15 AM

Screen Shot 2015-03-03 at 12.56.24 AM

A sample of the transactions is shown below. In the lower pane, the HTTPS POST of credentials and some HTTP responses from ixl.com are shown. The transaction selected in the proxy shows an HTTP GET, with user cookies sent without encryption.

Screen Shot 2015-03-03 at 1.06.57 AM

Transmission of plain text passwords without encryption
From a teacher account, the 'roster' page loads a class roster and displays each student's current password in plain text. This reveals that the passwords are not hashed at the server, and since the transfer is over HTTP, it poses a risk that an attacker could obtain these passwords through direct observation of traffic or by hijacking a teacher's session.  An additional observation: encrypting usernames and passwords when users log in shows a recognition by the company that this information should be protected, but sending entire class rosters with plaintext passwords is a much larger risk than exposing a single user's credentials.

The screen shots below show the class roster page in the browser, and the class roster data in the proxy

Screen Shot 2015-03-03 at 1.17.06 AM

Screen Shot 2015-03-03 at 1.21.27 AM

Notice also that the usernames contain the full first and last names of the students.  These are assigned by the service when the classroom is set up so there is no option for the teacher to make usernames that don’t include the students’ full names.

Mobile app transmits username and password in URL
When a user logs in from the mobile app, the username and password are included in the URL of the POST transaction used to authenticate the session. Though this information is encrypted in transit, best practice is to send all sensitive data in the HTTP message body, not in the URL.
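The difference is easy to see with a hypothetical login endpoint; example.com and the parameter names below are placeholders, not IXL's actual API:

```python
import requests

username, password = "jane.student", "s3cret"

# Discouraged: credentials in URL query parameters. Even over HTTPS the full URL
# can end up in server logs, proxy logs, and browser history.
requests.post(f"https://example.com/login?username={username}&password={password}")

# Preferred: credentials in the POST body, which is not recorded in URL-based logs.
requests.post("https://example.com/login", data={"username": username, "password": password})
```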

This is documented in OWASP Application Security Verification Standard requirement 9.3, which is part of the lowest specified set of verification requirements (Opportunistic).

Screen Shot 2015-03-08 at 11.04.55 PM

The OWASP Application Security FAQ has more information.  Consider that since the student usernames contain their full first and last names, logging as described below would capture the usernames and passwords, and the full names of the students.

Screen Shot 2015-03-12 at 10.06.42 AM

The proxy log below shows the HTTPS POST with credentials in the URL, and that transactions after authentication are sent over HTTP, similar to the behavior of the web app.

Screen Shot 2015-03-03 at 1.37.59 AM

(Updated after posting to reorder the sections – no content changed)

Email Transport Layer Security, Educational Sites and Apps, and Schools

This post will focus on Email Transport Layer Security. This is often just called TLS and sometimes ‘starttls’.  As the name implies, TLS protects the contents of traffic as it travels between endpoints on the internet by encrypting the content and by verifying the authenticity of the servers at the endpoints.   The SSL/HTTPS used by browsers is a form of TLS that protects traffic flowing between browsers and web servers.

Email TLS protects the contents of emails as they travel between mail servers.  For example, if an educational service (let's call it SuperDuperReaders.com) sends a class update to a teacher's Google for Education email account, TLS can be used to encrypt the email as it travels from SuperDuperReaders.com's email server to Google's email server.  (The Gmail interface handles encryption between Google's servers and the teacher's email client.)  Google's Safer Email site has more information about how this works, and I'll refer to this site again later in the post.

This is back-end infrastructure, and a more remote threat than sending the same content from a mail server to a user's email client without encryption (for example, when a user reads email over airport wifi and the connection is not encrypted).  However, maintaining a comprehensive security program means being mindful of potential risks to the security of information and systematically taking steps to protect against those risks.  Another thing to consider is that sometimes a number of small vulnerabilities can be leveraged to create more powerful exploits.  Furthermore, I am not aware of a convincing reason for not supporting TLS for email.  Security is difficult. It's important to enable the "easy" protections when they are available.

One more detail of email TLS is that both ends of the connection must support it, and it is up to the server sending the mail to request that it be used for the transfer.  (This is where 'starttls' comes from: it's the name of the command the sending server uses to request TLS.)  From the previous example, if SuperDuperReaders.com wanted to send email to a teacher's Google Apps for Education email account using TLS, SuperDuperReaders would initiate the connection with Google and ask for TLS with a STARTTLS request. Google's server would reply that TLS is supported, and the encrypted connection would be set up.
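That handshake is easy to observe for yourself. Here is a small sketch using Python's smtplib that connects to a receiving mail server the way a sending server would and checks whether it advertises STARTTLS (gmail-smtp-in.l.google.com is Google's public MX host; note that outbound port 25 is blocked on many consumer networks):

```python
import smtplib

# Connect to the receiving mail server and inspect its EHLO response,
# just as a well-behaved sending server would before transferring mail.
with smtplib.SMTP("gmail-smtp-in.l.google.com", 25, timeout=10) as server:
    server.ehlo("example.org")
    if server.has_extn("starttls"):
        server.starttls()  # upgrade the connection to TLS before sending anything sensitive
        print("Receiving server supports STARTTLS; connection upgraded.")
    else:
        print("Receiving server does not advertise STARTTLS.")
```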

Returning to Google’s Safer Email site:  At this site Google posts daily percentages of mail sent and received using TLS for the domains with the most traffic to and from Google’s mail servers.  For mail coming in to Google, the mail will only be sent with TLS if the other site (SuperDuperReaders for example) requests it.  Google always requests TLS for email connections, so for mail leaving Google it will only be sent with TLS if the receiving domain supports TLS.  The Safer Email website allows searches by domain name, and the data set can be downloaded from the site.

As an example, the screenshot below shows the results for edmodo.com from today’s dataset.  Google reports that 0% of the mail entering Google from Edmodo is sent with TLS. This means that Edmodo’s servers are not requesting TLS when sending email.  Outbound mail is not listed because not enough mail flows from Google to Edmodo to be recorded in this dataset.

(UPDATED: On March 11th Edmodo enabled STARTTLS for outgoing email, and this should show up in the Google Safer Email results by March 16th or 17th, as they lag by 5 days or so – thanks to Edmodo for quick response!)

Screen Shot 2015-03-10 at 3.39.24 PM

I reported this to Edmodo in June 2014. At that time I generated an email from Edmodo by entering a post in a test classroom, and examined the email headers to confirm that TLS was not used during transmission from Edmodo to my email account. The text of the email is shown below.

Screen Shot 2015-03-10 at 3.50.34 PM

Though the message text is admittedly somewhat contrived, it does demonstrate a case where communications sent with an expectation of privacy, containing personal information about students, were sent across a portion of the network without encryption.  (These are not real students.)  Emails from other services might contain more sensitive information than what is in this example.

Google's site is a way for end users to understand what companies are doing with email TLS without having to decipher email headers.  It can also be used to check what school districts are doing, if they send enough mail to be included in Google's dataset.  For example, a data set downloaded today contained the domain names of some school districts and educational organizations that appear to be sending email without requesting TLS from the receiving side. These include:

browardschools.com, cherrycreekschools.org, fultonschools.org, charterschoolsusa.com

A search on domains with ‘k12’ that have 0% incoming TLS yields this:

beaumont.k12.tx.us, bluevalleyk12.org, cms.k12.nc.us, cobbk12.org, dcsdk12.org, dpsk12.org, elkriver.k12.mn.us, forsyth.k12.ga.us, greenville.k12.sc.us, henrico.k12.va.us, jeffco.k12.co.us, mason.k12.oh.us, sdhc.k12.fl.us, sheboygan.k12.wi.us, worthington.k12.oh.us

Note that Google’s Safer Email data set changes day to day based on the amount of email traffic seen by google per-domain.

For those inclined, here is how to check if email in a gmail inbox was sent to Gmail with TLS:

Select the “show original” menu item for the mail you want to check.

Screen Shot 2015-03-10 at 4.54.49 PM

This will bring up a view of the email header.  Look for “TLS” in the text.
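If you have saved the raw message from the "Show original" view, a short sketch that prints each Received header so you can scan for the TLS notation Gmail adds (the filename is hypothetical):

```python
from email import policy
from email.parser import BytesParser

# Parse a raw message saved from Gmail's "Show original" view.
with open("original_message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

# Gmail notes each hop's encryption in the Received header, e.g. "(version=TLS1_2 cipher=...)".
for received in msg.get_all("Received", []):
    marker = "TLS noted" if "TLS" in received else "no TLS noted"
    print(f"[{marker}] {' '.join(str(received).split())}")
```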

For example, here is part of the headers for the mail above, confirming it was sent from Yahoo to Google using TLS.

Screen Shot 2015-03-10 at 4.51.17 PM

For completeness, here is the similar portion of headers for the mail from Edmodo in June (shown earlier in the post), sent without TLS from Edmodo to Google.

Screen Shot 2015-03-10 at 4.46.13 PM

(updated after posting to add k12 domains to the list of school district domains with 0% TLS)

Infosnap Security Observations: (Updated 6/7/15 to reflect many security improvements)

In this post I’ll be reviewing the security practices of Infosnap.  Infosnap is a signatory of the Student Privacy Pledge.

UPDATED: June 7, 2015: Infosnap has made numerous improvements to the service’s security, as described below. Over the past few weeks I have had a very open and collaborative interaction with Infosnap to discuss the many improvements, and verify them from my side.  Many thanks to everyone at Infosnap for making these changes and for enabling the open communications about them.

The changes are listed here and also updated inline. I have independently verified all of these.

Application Software

• Changes were made to the user login process to provide identical responses to invalid username and password combinations. This was done to make it more difficult to test potential usernames for validity.

• Changes were made to the user login process to obfuscate the email address of the account holder. This was done to reduce the sensitive information displayed to the user.

Network

• Changes were made to the session authentication token to set the “httpOnly” flag. This was done to prevent an XSS (Cross-Site Scripting) exploit from gaining access to a session cookie.

• Changes were made to the session authentication token to set the “secure” flag. This protects against accidental transmission of the session cookie across an unencrypted link.

• Changes were made to invalidate the session authentication token at logout. This was done to prevent the unauthorized reuse of a session cookie.

• Changes were made to add the X-Frame-Options header to responses. This was done to prevent frame sniffing attacks.

• Changes were made to reduce server details included in response headers. This was done to reduce the amount of internal system information provided.

Snapcodes

• A change was made to the recommended procedure for utilizing Snapcodes. The standard procedure is to require both Snapcode and Date of Birth verification for authorization.

• Changes were made to the Snapcode authentication process to limit the number of times a user can provide an incorrect Date of Birth before the Snapcode is invalidated. This was done to make brute force attacks more difficult.

Notification and response

Update: In May 2015 Infosnap notified me of many security enhancements as described in the updates to this post.

I first reported these observations in this post to Infosnap in August, 2013.  Infosnap responded that the issues could not be addressed at that time without causing significant impact and potential usability issues to its user base.  I reported them again in August 2014 and received a similar response.  The Student Privacy Pledge site does not show the dates that companies signed but the Wayback Machine’s Jan 20th snapshot shows that Infosnap had signed by then.  In March 2015 I reconfirmed most of the observations outlined below.  Since March Infosnap has made numerous security improvements as described in the updates to the post, and as captured in the updated Web App Security Test Plan results included at the end of the post.

Overview of the service

Infosnap is a school registration service.  Schools pre-load some information, and Infosnap sends parents an access code, called a "snapcode", by email.  Parents use the snapcode to associate the student's record with an Infosnap account and complete the registration.  An excerpt of an email sent to a parent is shown below.

Screen Shot 2015-03-08 at 12.44.02 AM

The school preloads information from its records, including the student's name, address, and date of birth, along with the student's parents and their contact information.  When parents access the record they can update this information, add student medical information, and select directory sharing preferences.  The screenshot below shows some of the personal information that is preloaded to the service.

Screen Shot 2015-03-08 at 10.31.44 PM

The next screenshot shows the information about medical conditions that is collected by the service.

Screen Shot 2015-03-08 at 12.55.16 AM

Security observations

The first person to present a snapcode gets access to the records, with no double checks against email addresses known to the school

Update 6/7/15: This is mitigated by the changes to make birthdate verification the default, and to lock out a snapcode after 4 incorrect birthdates are presented by the user  

When a snapcode is presented, the user logs in (if a returning user) or creates a new account, and the student record is associated with that account. The account usernames are email addresses, and there is no verification that the student record is being accessed by the owner of an email account known to the school for that student.  This means that anyone who obtains a snapcode can access the student record if it is presented before a parent uses it to register the student.  There is a feature that the school can configure to require the student’s birthday before gaining access, but I observed that it is not rate limited and can be quickly bypassed with a script or “fuzz tool” that automatically cycles through possible birthdates.
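The missing control is conceptually simple. Here is an illustrative sketch of an attempt counter that invalidates a snapcode after a few wrong birthdates (it mirrors the lockout Infosnap later adopted, but it is my own sketch, not their implementation):

```python
MAX_ATTEMPTS = 4  # lock out after four incorrect birthdates

failed_attempts: dict[str, int] = {}  # snapcode -> consecutive failures (in-memory, for illustration)
invalidated_snapcodes: set[str] = set()

def check_birthdate(snapcode: str, submitted_dob: str, actual_dob: str) -> bool:
    """Verify a birthdate, invalidating the snapcode after too many wrong guesses."""
    if snapcode in invalidated_snapcodes:
        return False
    if submitted_dob == actual_dob:
        failed_attempts.pop(snapcode, None)
        return True
    failed_attempts[snapcode] = failed_attempts.get(snapcode, 0) + 1
    if failed_attempts[snapcode] >= MAX_ATTEMPTS:
        invalidated_snapcodes.add(snapcode)
    return False
```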

Snapcodes are sent in email and included in URLs

As shown above, snapcodes are sent to parents in emails.  Handling of these codes is very similar to handling of passwords, as both can be used to gain access to information protected by the code or password.   Sending passwords in emails is documented to be counter to best practices, and snapcodes should be treated with similar care.  This is discussed in the OWASP Application Security FAQs, which states: “Emailing the actual password in clear text can be risky as an attacker can obtain it by sniffing. Also the mail containing the password might have a long life time and could be viewed by an attacker while it is lying in the mailbox of the user. Apart form the above threats, a malicious user can do shoulder-surfing to view the password or login credentials.”

The embedded link in the email contains the student’s snapcode in the URL.  This is also against best practices because the URL may be logged in browser history, or server logs.  This requirement is part of the OWASP Application Security Verification Standard and the text of the requirement is shown below.

Screen Shot 2015-03-08 at 11.04.55 PM

Valid snapcodes can be used to extract student and parent information without logging in to an account

Update: 6/7/15: changes have been made to obfuscate the parent’s full email address on all interfaces

If a parent has already registered and "claimed" the snapcode, an attacker in possession of the snapcode could still extract considerable information by attempting to attach the snapcode to another existing account controlled by the attacker. This information includes the name of the school district (served in a banner at the top of the page), the child's first name and birthdate, and the parent's email address, first name, and last initial.

After the snapcode is entered, the service asks for the student's date of birth as an additional check (optional and controlled by the school).  The dialog that asks for this includes the student's first name.

Screen Shot 2015-03-08 at 11.19.28 PM

If the birthday presented is incorrect, the user is prompted to try again. I observed there were no rate limits or attempt limits on this check.  As described above, an attacker could use an automated tool to quickly bypass this check, and in the process discover the student’s birthdate.

Once past the birthdate check, the potential attacker would be presented with a page that reveals the parent’s first name and last initial. The email address is obscured but can be obtained in another way that I’ll show later.

Screen Shot 2015-03-08 at 11.23.58 PM

If a valid snapcode is presented that has already been “claimed” and a potential attacker selects the option to create a new account, the parent’s full email address is revealed.

Screen Shot 2015-03-08 at 11.29.00 PM

Consider that among other things, an attacker could construct a phishing attack using all of this information (school/district name and logo, student and parent first name, parent last initial, student birthdate, parent email address)  to make the malicious email or web link look legitimate.

Authentication token and session handling

Update 6/7/15: The session cookie’s httpOnly and Secure flags are now set, and session cookies are invalidated when a user logs out.

Note: this is more technical than the above sections, but important. An attacker in possession of a user’s session cookies can potentially control the user’s account and view all the information it holds.  Broken session management and authentication controls are #2 in the latest OWASP Top 10 list of web app vulnerabilities.  Refer to my earlier post on Session Cookies for more information.

Several best practices are not followed with respect to protecting session authentication tokens, which are codes that give a user access to an account after presenting username and password.  Problems include:

  • Not invalidating a session authentication token when a user logs out
    • This means that if an attacker obtains the token, logging out does not revoke the attacker's access
  • Not setting the 'secure' flag on session authentication token cookies or using strict transport security. These protect against accidental transmission of session authentication tokens over unencrypted links
    • The secure flag prevents the cookie from being sent over an unencrypted link.
      • It's not uncommon to see this flag left unset because of load balancer requirements in the infrastructure, but in that case strict transport security must be used, and it is not.
    • Strict transport security tells a browser to always connect to a site over an encrypted link, even if the URL is presented as http
  • Not setting the httpOnly flag on the session authentication token cookies, which could allow them to be read by a malicious script and sent to an attacker.

The image below shows that the httpOnly and secure flags are not set on the FamilyPortalAuthentication cookie. This is the cookie that grants access to a user's authenticated session.

Screen Shot 2015-03-09 at 12.02.53 AM
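For reference, setting those flags is typically a one-line change in the framework. Here is a minimal sketch using Flask as a stand-in (not Infosnap's actual platform), with a placeholder token value:

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    response = make_response("logged in")
    response.set_cookie(
        "FamilyPortalAuthentication",
        "example-token-value",  # placeholder; a real app would issue a random session token
        secure=True,            # never send the cookie over an unencrypted connection
        httponly=True,          # keep the cookie out of reach of page scripts (XSS)
    )
    return response
```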

Updated: Full end-user test plan results

Full results of end-user security testing, repeated in June 2015 to capture numerous recent security improvements, are in the embedded report.  (The original results from March follow below this latest version.)

June 2015 results:

March 2015 results: