Last updated: 2020-11-21

Checks: 7 passed, 0 failed

Knit directory: r4ds_book/

This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20200814) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version 6e7b3db. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .Rproj.user/

Untracked files:
    Untracked:  analysis/images/
    Untracked:  code_snipp.txt
    Untracked:  data/at_health_facilities.csv
    Untracked:  data/infant_hiv.csv
    Untracked:  data/measurements.csv
    Untracked:  data/person.csv
    Untracked:  data/ranking.csv
    Untracked:  data/visited.csv

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
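A minimal sketch of the publish step mentioned above (the commit message is illustrative):

# commit the Rmd, rebuild the site, and commit the generated HTML in one step
wflow_publish("analysis/ch8_import_data.Rmd",
              message = "Rebuild chapter 8 import notes")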


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/ch8_import_data.Rmd) and HTML (docs/ch8_import_data.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File Version Author Date Message
html 7ed0458 sciencificity 2020-11-10 Build site.
html 86457fa sciencificity 2020-11-10 Build site.
html 4879249 sciencificity 2020-11-09 Build site.
html e423967 sciencificity 2020-11-08 Build site.
html 0d223fb sciencificity 2020-11-08 Build site.
html ecd1d8e sciencificity 2020-11-07 Build site.
html 274005c sciencificity 2020-11-06 Build site.
html 60e7ce2 sciencificity 2020-11-02 Build site.
html db5a796 sciencificity 2020-11-01 Build site.
html d8813e9 sciencificity 2020-11-01 Build site.
html bf15f3b sciencificity 2020-11-01 Build site.
html 0aef1b0 sciencificity 2020-10-31 Build site.
html bdc0881 sciencificity 2020-10-26 Build site.
html 8224544 sciencificity 2020-10-26 Build site.
html 2f8dcc0 sciencificity 2020-10-25 Build site.
html 61e2324 sciencificity 2020-10-25 Build site.
html 570c0bb sciencificity 2020-10-22 Build site.
Rmd 7c43632 sciencificity 2020-10-22 Completed Chapter 8
html cfbefe6 sciencificity 2020-10-21 Build site.
Rmd 8e445e8 sciencificity 2020-10-21 updated the workflow-projects section
html 4497db4 sciencificity 2020-10-18 Build site.
Rmd f81b11a sciencificity 2020-10-18 added Chapter 14 and some of Chapter 8

options(scipen=10000)
library(tidyverse)
library(flair)
library(emo)
library(lubridate)
library(magrittr)

Inline CSV file

read_csv("a,b,c
1,2,3
4,5,6")
# A tibble: 2 x 3
      a     b     c
  <dbl> <dbl> <dbl>
1     1     2     3
2     4     5     6

Skip some lines

  • metadata
  • commented lines that you don’t want to read
read_csv("The first line of metadata
  The second line of metadata
  x,y,z
  1,2,3", skip = 2)
# A tibble: 1 x 3
      x     y     z
  <dbl> <dbl> <dbl>
1     1     2     3
read_csv("# A comment I want to skip
  x,y,z
  1,2,3", comment = "#")
# A tibble: 1 x 3
      x     y     z
  <dbl> <dbl> <dbl>
1     1     2     3

No column names in data

read_csv("1,2,3\n4,5,6", 
         # \n adds a new line 
         col_names = FALSE
         # cols will be labelled sequentially from X1 to Xn
         ) 
# A tibble: 2 x 3
     X1    X2    X3
  <dbl> <dbl> <dbl>
1     1     2     3
2     4     5     6
read_csv("1,2,3\n4,5,6", 
         # cols named as you provided here
         col_names = c("x", "y", "z")) 
# A tibble: 2 x 3
      x     y     z
  <dbl> <dbl> <dbl>
1     1     2     3
2     4     5     6

NA values

read_csv("a,b,c,d\nnull,1,2,.", 
         # here we specify that . and null
         # should be treated as missing values
         na = c(".",
                "null"))
# A tibble: 1 x 4
  a         b     c d    
  <lgl> <dbl> <dbl> <lgl>
1 NA        1     2 NA   

Exercises

  1. What function would you use to read a file where fields were separated with
    “|”?

    read_delim()

    # from the ?read_delim help page
    read_delim("a|b\n1.0|2.0", delim = "|")
    # A tibble: 1 x 2
          a     b
      <dbl> <dbl>
    1     1     2
  2. Apart from file, skip, and comment, what other arguments do read_csv() and read_tsv() have in common?

    All of their arguments are common to both functions. The remaining shared arguments are listed here (a programmatic check is sketched after the list):

    • col_names
    • col_types
    • locale
    • na
    • quoted_na
    • quote
    • trim_ws
    • n_max
    • guess_max
    • progress
    • skip_empty_rows
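    A quick way to check this programmatically (a sketch comparing the two functions’ formal arguments):

    # arguments shared by read_csv() and read_tsv() -- all of them
    intersect(names(formals(read_csv)), names(formals(read_tsv)))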
  3. What are the most important arguments to read_fwf()?

    • file to read
    • col_positions, as created by fwf_empty(), fwf_widths(), or fwf_positions(), which tells the function where each column starts and ends (a small sketch follows).
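    A minimal sketch with inline fixed-width data (the values are made up):

    # two columns: a 6-character name field followed by a 2-digit age
    read_fwf("John  25\nJane  30",
             fwf_widths(c(6, 2), c("name", "age")))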
  4. Sometimes strings in a CSV file contain commas. To prevent them from causing problems they need to be surrounded by a quoting character, like " or '. By default, read_csv() assumes that the quoting character will be ". What argument to read_csv() do you need to specify to read the following text into a data frame?

    "x,y\n1,'a,b'"

    Specify the quote argument.

    read_csv("x,y\n1,'a,b'", 
             quote = "'")
    # A tibble: 1 x 2
          x y    
      <dbl> <chr>
    1     1 a,b  
  5. Identify what is wrong with each of the following inline CSV files. What happens when you run the code?

    read_csv("a,b\n1,2,3\n4,5,6") 
    read_csv("a,b,c\n1,2\n1,2,3,4")
    read_csv("a,b\n\"1")
    read_csv("a,b\n1,2\na,b")
    read_csv("a;b\n1;3")

    read_csv("a,b\n1,2,3\n4,5,6")
    Only 2 columns are specified in the header, but each row provides 3 values.

    read_csv("a,b,c\n1,2\n1,2,3,4")
    3 column names are provided, but the first row has too few values and the second too many.

    read_csv("a,b\n\"1")
    2 column names are provided, but only one value is given, and its opening " is never closed.

    read_csv("a,b\n1,2\na,b")
    Nothing is syntactically wrong, but the last row repeats the column headings, so both columns are read in as character.

    read_csv("a;b\n1;3")
    The delimiter is ;, so read_csv2(), which uses ; as its delimiter, should have been used instead (a fix is sketched after the output below).

    They all run, but most produce warnings, and some are not imported as expected.

    read_csv("a,b\n1,2,3\n4,5,6") 
    Warning: 2 parsing failures.
    row col  expected    actual         file
      1  -- 2 columns 3 columns literal data
      2  -- 2 columns 3 columns literal data
    # A tibble: 2 x 2
          a     b
      <dbl> <dbl>
    1     1     2
    2     4     5
    read_csv("a,b,c\n1,2\n1,2,3,4")
    Warning: 2 parsing failures.
    row col  expected    actual         file
      1  -- 3 columns 2 columns literal data
      2  -- 3 columns 4 columns literal data
    # A tibble: 2 x 3
          a     b     c
      <dbl> <dbl> <dbl>
    1     1     2    NA
    2     1     2     3
    read_csv("a,b\n\"1")
    Warning: 2 parsing failures.
    row col                     expected    actual         file
      1  a  closing quote at end of file           literal data
      1  -- 2 columns                    1 columns literal data
    # A tibble: 1 x 2
          a b    
      <dbl> <chr>
    1     1 <NA> 
    read_csv("a,b\n1,2\na,b")
    # A tibble: 2 x 2
      a     b    
      <chr> <chr>
    1 1     2    
    2 a     b    
    read_csv("a;b\n1;3")
    # A tibble: 1 x 1
      `a;b`
      <chr>
    1 1;3  
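    As noted above, the last case is fixed by switching to read_csv2(), which expects ; as the delimiter:

    # parses into two columns, a = 1 and b = 3
    read_csv2("a;b\n1;3")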

Parsing

str(parse_logical(c("TRUE", "FALSE", "NA")))

 logi [1:3] TRUE FALSE NA
str(parse_integer(c("1", "2", "3")))

 int [1:3] 1 2 3
str(parse_date(c("2010-01-01", "1979-10-14")))

 Date[1:2], format: "2010-01-01" "1979-10-14"

All parse_xxx() variants share a uniform interface: the first argument is the character vector to parse, and the na argument names the strings that should be treated as missing.

parse_xxx(character_vector_to_parse, na = c(x, y))

parse_integer(c("1", "231", ".", "456"), na = ".")
[1]   1 231  NA 456
(x <- parse_integer(c("123", "345", "abc", "123.45")))
Warning: 2 parsing failures.
row col               expected actual
  3  -- an integer             abc   
  4  -- no trailing characters 123.45
[1] 123 345  NA  NA
attr(,"problems")
# A tibble: 2 x 4
    row   col expected               actual
  <int> <int> <chr>                  <chr> 
1     3    NA an integer             abc   
2     4    NA no trailing characters 123.45

To detect problems use problems().

problems(x)

# A tibble: 2 x 4
    row   col expected               actual
  <int> <int> <chr>                  <chr> 
1     3    NA an integer             abc   
2     4    NA no trailing characters 123.45

Number conventions differ around the world.

For example, the integer part may be separated from the decimal part by either a . or a ,. To tell a parsing function which convention to expect, pass locale = locale(...) to it.

parse_double("1.23")

[1] 1.23
parse_double("1,23", locale = locale(decimal_mark = ","))

[1] 1.23
parse_number("$100")
[1] 100
parse_number("20%")
[1] 20
parse_number("It cost $123.45")
[1] 123.45
# Used in America
parse_number("$123,456,789")
[1] 123456789
# Used in many parts of Europe
parse_number("123.456.789", 
             locale = locale(grouping_mark = "."))
[1] 123456789
# Used in Switzerland
parse_number("123'456'789", 
             locale = locale(grouping_mark = "'"))
[1] 123456789
charToRaw("Hadley")
[1] 48 61 64 6c 65 79
charToRaw("Vebash")
[1] 56 65 62 61 73 68
(x1 <- "El Ni\xf1o was particularly bad this year")
[1] "El Niño was particularly bad this year"
(x2 <- "\x82\xb1\x82\xf1\x82\xc9\x82\xbf\x82\xcd")
[1] "‚±‚ñ‚É‚¿‚Í"

To fix the problem you need to specify the encoding in parse_character():

parse_character(x1, 
                locale = locale(encoding = "Latin1"))

[1] "El Niño was particularly bad this year"
parse_character(x2, 
                locale = locale(encoding = "Shift-JIS"))

[1] "<U+3053><U+3093><U+306B><U+3061><U+306F>"

You can try guess_encoding() to help you out:

guess_encoding(charToRaw(x1))
# A tibble: 2 x 2
  encoding   confidence
  <chr>           <dbl>
1 ISO-8859-1       0.46
2 ISO-8859-9       0.23
guess_encoding(charToRaw(x2))
# A tibble: 1 x 2
  encoding confidence
  <chr>         <dbl>
1 KOI8-R         0.42
fruit <- c("apple", "banana")
parse_factor(c("apple", "banana", "bananana"), 
             levels = fruit)
[1] apple  banana <NA>  
attr(,"problems")
# A tibble: 1 x 4
    row   col expected           actual  
  <int> <int> <chr>              <chr>   
1     3    NA value in level set bananana
Levels: apple banana
parse_datetime("2010-10-01T2010")
[1] "2010-10-01 20:10:00 UTC"
# If time is omitted, it will be set to midnight
parse_datetime("20101010")
[1] "2010-10-10 UTC"
parse_date("2010-10-01")
[1] "2010-10-01"
library(hms)
parse_time("01:10 am")
01:10:00
parse_time("20:10:01")
20:10:01
parse_date("01/02/15", "%m/%d/%y")
[1] "2015-01-02"
parse_date("01/02/15", "%d/%m/%y")
[1] "2015-02-01"
parse_date("01/02/15", "%y/%m/%d")
[1] "2001-02-15"
parse_date("1 janvier 2015", "%d %B %Y", 
           locale = locale("fr"))
[1] "2015-01-01"

Exercises

  1. What are the most important arguments to locale()?

    • date_names: for example, above we specified “fr” to get French month and day names when parsing a date.
    • decimal_mark: set this if your decimal numbers use a separator other than “.”.
    • grouping_mark: the default is “,”, as used in the US; set this if your data groups digits differently.
    • tz: the default timezone is UTC, but you may want to change it to your own timezone (a locale combining these arguments is sketched below).
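    A sketch, with illustrative settings for data written in a German-style convention:

    locale(date_names = "de",
           decimal_mark = ",",
           grouping_mark = ".",
           tz = "Europe/Berlin")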
  2. What happens if you try and set decimal_mark and grouping_mark to the same character? What happens to the default value of grouping_mark when you set decimal_mark to “,”? What happens to the default value of decimal_mark when you set the grouping_mark to “.”?

    • What happens if you try and set decimal_mark and grouping_mark to the same character?

      # setting decimal_mark and grouping_mark
      # to the same character
      parse_double("123,456,78", 
                   locale = locale(decimal_mark = ",",
                                   grouping_mark = ","))
      parse_number("123,456,78", 
                   locale = locale(decimal_mark = ",",
                                   grouping_mark = ","))

      Both parse_number() and parse_double() fail with Error: decimal_mark and grouping_mark must be different.

    • What happens to the default value of grouping_mark when you set decimal_mark to “,”?

      parse_number("123 456,78", locale = 
                     locale(decimal_mark = ","))
      [1] 123
      # when both are specified, 
      # number parsed correctly
      parse_number("123 456,78", locale = 
                     locale(decimal_mark = ",",
                            grouping_mark = " "))
      [1] 123456.8
      # even though no grouping_mark specified, 
      # parse_number handles the grouping mark 
      # of . well.
      # Below the reason is revealed to be the default
      # switching in the readr code
      parse_number("123.456,78", 
                   locale = locale(decimal_mark = ","))
      [1] 123456.8
      # preserve the decimals
      print(parse_number("123.456,78", 
              locale = locale(decimal_mark = ",")),
              digits=10)
      [1] 123456.78
      print(parse_number("123456789,11", 
              locale = locale(decimal_mark = ",")),
              digits=15)
      [1] 123456789.11

      In the first call parse_number() does not recognise the space as a grouping mark, so the number is parsed incorrectly. Initially I thought that overriding decimal_mark with the default grouping mark removed that default, so the grouping mark would always have to be supplied explicitly whenever the data contains one.

      It turns out the code sets decimal_mark if only grouping_mark was supplied, and vice versa, but it cannot cover every convention, so it simply toggles between . and ,.

      The defaults are decimal_mark = . and grouping_mark = ,. If you only change decimal_mark to , and don’t specify any grouping mark, the grouping mark defaults to ..

      In this case, however, the grouping mark is a space, and since we did not specify it, parsing goes wrong.

      My take-away: even knowing that the code defaults grouping_mark to . when you set decimal_mark to ,, it is safer to set both explicitly so parsing behaves as expected.

      From readr’s source file locale.R:

        if (missing(grouping_mark) && 
            !missing(decimal_mark)) {
          grouping_mark <- if (decimal_mark == ".") "," 
                           else "."
        } else if (missing(decimal_mark) && 
                   !missing(grouping_mark)) {
          decimal_mark <- if (grouping_mark == ".") "," 
                          else "."
        }

      For parse_double() we experience different results.

      parse_double("123456,78", 
                   locale = locale(decimal_mark = ","))
      [1] 123456.8
      parse_double("123.456,78", 
                   locale = locale(decimal_mark = ","))
      [1] NA
      attr(,"problems")
      # A tibble: 1 x 4
          row   col expected               actual    
        <int> <int> <chr>                  <chr>     
      1     1    NA no trailing characters 123.456,78
      parse_double("123.456,78", 
                   locale = locale(decimal_mark = ",",
                                   grouping_mark = "."))
      [1] NA
      attr(,"problems")
      # A tibble: 1 x 4
          row   col expected               actual    
        <int> <int> <chr>                  <chr>     
      1     1    NA no trailing characters 123.456,78
      parse_double("123 456,78", 
                   locale = locale(decimal_mark = ",",
                                   grouping_mark = " "))
      [1] NA
      attr(,"problems")
      # A tibble: 1 x 4
          row   col expected               actual    
        <int> <int> <chr>                  <chr>     
      1     1    NA no trailing characters 123 456,78

      Hmm, okay, so parse_double() is stricter and rejects grouped numbers even when we override the locale(). This Stack Overflow post confirms what we see here, and so do this post and this one. The only perplexing thing is: when I do set grouping_mark in locale(), why is it not honoured? Perhaps because parse_double() expects a plain decimal number and, unlike parse_number(), never strips grouping marks? 😕

    • What happens to the default value of decimal_mark when you set the grouping_mark to “.”?

      # as shown above, the readr code then defaults
      # the decimal mark to ,
      parse_number("5.123.456,78", 
                   locale = locale(grouping_mark = "."))
      [1] 5123457
      parse_number("5.123.456,78", 
                   locale = locale(decimal_mark = ",",
                                   grouping_mark = "."))
      [1] 5123457
      problems(parse_double("5.123.456,78", 
                            locale = locale(
                              decimal_mark = ",",
                              grouping_mark = ".")))
      # A tibble: 1 x 4
          row   col expected               actual      
        <int> <int> <chr>                  <chr>       
      1     1    NA no trailing characters 5.123.456,78
  3. I didn’t discuss the date_format and time_format options to locale(). What do they do? Construct an example that shows when they might be useful.

    Dates and times may be written in numerous ways. Dates, for example, can be YYYY-MM-DD or MM-DD-YY.

    The same goes for times: hh:mm:ss or hh:mm AM/PM.

    date_format and time_format set the default date and time formats for the locale, which is useful when you know your data stores dates or times in a form different from the default.

    locale("zu")
    <locale>
    Numbers:  123,456.78
    Formats:  %AD / %AT
    Timezone: UTC
    Encoding: UTF-8
    <date_names>
    Days:   Sonto (Son), Msombuluko (Mso), Lwesibili (Bil), Lwesithathu (Tha),
            Lwesine (Sin), Lwesihlanu (Hla), Mgqibelo (Mgq)
    Months: Januwari (Jan), Februwari (Feb), Mashi (Mas), Apreli (Apr), Meyi (Mey),
            Juni (Jun), Julayi (Jul), Agasti (Aga), Septhemba (Sep),
            Okthoba (Okt), Novemba (Nov), Disemba (Dis)
    AM/PM:  Ekuseni/Ntambama
    locale("af")
    <locale>
    Numbers:  123,456.78
    Formats:  %AD / %AT
    Timezone: UTC
    Encoding: UTF-8
    <date_names>
    Days:   Sondag (So), Maandag (Ma), Dinsdag (Di), Woensdag (Wo), Donderdag (Do),
            Vrydag (Vr), Saterdag (Sa)
    Months: Januarie (Jan.), Februarie (Feb.), Maart (Mrt.), April (Apr), Mei
            (Mei), Junie (Jun), Julie (Jul), Augustus (Aug), September
            (Sep), Oktober (Okt), November (Nov), Desember (Des)
    AM/PM:  vm./nm.
    locale_af <- locale("af", date_format = "%m %d",
           time_format = "%H:%M")
    
    read_csv("a, date, time
             x, Mrt. 23, 20:30
             y, Okt. 31, 15:45", locale = locale_af)
    # A tibble: 2 x 3
      a     date    time  
      <chr> <chr>   <time>
    1 x     Mrt. 23 20:30 
    2 y     Okt. 31 15:45 
  4. If you live outside the US, create a new locale object that encapsulates the settings for the types of file you read most commonly.

    locale_za <- locale("en", date_format = "%Y-%m-%d",
                        time_format = "%H:%M:%S")
  5. What’s the difference between read_csv() and read_csv2()?

    • read_csv2() uses ; as the delimiter and , as the decimal mark (common in parts of Europe).
    • read_csv() uses , as the delimiter and . as the decimal mark (the decimal-mark difference is sketched below).
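    A sketch of the decimal-mark difference (the values are made up):

    # read_csv(): comma-separated fields, . as the decimal mark
    read_csv("a,b\n1.5,2")
    # read_csv2(): semicolon-separated fields, , as the decimal mark
    read_csv2("a;b\n1,5;2")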
  6. What are the most common encodings used in Europe? What are the most common encodings used in Asia? Do some googling to find out.

    UTF-8 seems to be the standard everywhere now. Other than this:

    • Europe has mostly used Latin-1 (ISO-8859-1).

    • Asia has used a range of encodings, e.g. Shift-JIS in Japan.

  7. Generate the correct format string to parse each of the following dates and times:

    d1 <- "January 1, 2010"
    d2 <- "2015-Mar-07"
    d3 <- "06-Jun-2017"
    d4 <- c("August 19 (2015)", "July 1 (2015)")
    d5 <- "12/30/14" # Dec 30, 2014
    t1 <- "1705"
    t2 <- "11:15:10.12 PM"
    d1 <- "January 1, 2010"
    parse_date(d1, "%B %d, %Y")
    [1] "2010-01-01"
    d2 <- "2015-Mar-07"
    parse_date(d2, "%Y-%b-%d")
    [1] "2015-03-07"
    d3 <- "06-Jun-2017"
    parse_date(d3, "%d-%b-%Y")
    [1] "2017-06-06"
    d4 <- c("August 19 (2015)", "July 1 (2015)")
    parse_date(d4, "%B %d (%Y)")
    [1] "2015-08-19" "2015-07-01"
    d5 <- "12/30/14" # Dec 30, 2014
    parse_date(d5, "%m/%d/%y")
    [1] "2014-12-30"
    t1 <- "1705"
    parse_time(t1, "%H%M")
    17:05:00
    t2 <- "11:15:10.12 PM"
    parse_time(t2, "%I:%M:%OS %p")
    23:15:10.12

readr Strategy

The readr 📦 uses the first 1000 rows of a column to guess the type of data that column contains. This works for most files, but if a file is sorted in an unfortunate way (e.g. all missing values at the top) the inferred column type may be wrong.
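A small constructed example of that failure mode (the data are made up): the first 1000 values of x are missing, so the column is guessed as logical and the date on row 1001 fails to parse.

tricky <- paste0("x\n",
                 paste(rep("NA", 1000), collapse = "\n"),
                 "\n2020-01-01")
read_csv(tricky)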

guess_parser("2010-10-01")
[1] "date"
guess_parser("15:01")
[1] "time"
guess_parser(c("TRUE", "FALSE"))
[1] "logical"
guess_parser(c("1", "5", "9"))
[1] "double"
guess_parser(c("12,352,561"))
[1] "number"
str(parse_guess("2010-10-10"))
 Date[1:1], format: "2010-10-10"
challenge <- read_csv(readr_example("challenge.csv"))
problems(challenge)
# A tibble: 1,000 x 5
     row col   expected       actual   file                                     
   <int> <chr> <chr>          <chr>    <chr>                                    
 1  1001 y     1/0/T/F/TRUE/~ 2015-01~ 'C:/Users/vebashini/Documents/R/win-libr~
 2  1002 y     1/0/T/F/TRUE/~ 2018-05~ 'C:/Users/vebashini/Documents/R/win-libr~
 3  1003 y     1/0/T/F/TRUE/~ 2015-09~ 'C:/Users/vebashini/Documents/R/win-libr~
 4  1004 y     1/0/T/F/TRUE/~ 2012-11~ 'C:/Users/vebashini/Documents/R/win-libr~
 5  1005 y     1/0/T/F/TRUE/~ 2020-01~ 'C:/Users/vebashini/Documents/R/win-libr~
 6  1006 y     1/0/T/F/TRUE/~ 2016-04~ 'C:/Users/vebashini/Documents/R/win-libr~
 7  1007 y     1/0/T/F/TRUE/~ 2011-05~ 'C:/Users/vebashini/Documents/R/win-libr~
 8  1008 y     1/0/T/F/TRUE/~ 2020-07~ 'C:/Users/vebashini/Documents/R/win-libr~
 9  1009 y     1/0/T/F/TRUE/~ 2011-04~ 'C:/Users/vebashini/Documents/R/win-libr~
10  1010 y     1/0/T/F/TRUE/~ 2010-05~ 'C:/Users/vebashini/Documents/R/win-libr~
# ... with 990 more rows
tail(challenge)
# A tibble: 6 x 2
      x y    
  <dbl> <lgl>
1 0.805 NA   
2 0.164 NA   
3 0.472 NA   
4 0.718 NA   
5 0.270 NA   
6 0.608 NA   

When you encounter problems:

  1. Work column by column to fix the problems.
  2. Copy and paste the specification that read_xxx() inferred and use it to correct the columns that caused issues. A note here: in older versions of readr the x column, whose first values are whole numbers, was inferred as integer. It is now inferred as double, so the only remaining issue is the y column, which has many NA values at the top and is therefore wrongly guessed as logical when it is in fact a date.

This puzzled me a little, because the first 1000 rows of x contain only whole numbers. My guess is that the heuristic has been amended and integer guessing has been dropped, so whole numbers are now guessed as double.
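A sketch of step 2: spec() prints the column specification readr guessed, ready to copy into col_types and correct.

# the specification readr inferred for challenge.csv
spec(challenge)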

challenge <- read_csv(
  readr_example("challenge.csv"), 
  col_types = cols(
    x = col_double(),
    # change here from col_logical() to
    # col_date()
    y = col_date() 
  )
)
tail(challenge)
# A tibble: 6 x 2
      x y         
  <dbl> <date>    
1 0.805 2019-11-21
2 0.164 2018-03-29
3 0.472 2014-08-04
4 0.718 2015-08-16
5 0.270 2020-02-04
6 0.608 2019-01-06
challenge %>% 
  head(1010) %>% 
  tail(11)
# A tibble: 11 x 2
          x y         
      <dbl> <date>    
 1 4548     NA        
 2    0.238 2015-01-16
 3    0.412 2018-05-18
 4    0.746 2015-09-05
 5    0.723 2012-11-28
 6    0.615 2020-01-13
 7    0.474 2016-04-17
 8    0.578 2011-05-14
 9    0.242 2020-07-18
10    0.114 2011-04-30
11    0.298 2010-05-11
challenge %>% 
  head(1000) %>% 
  filter(as.integer(x) != x)
# A tibble: 0 x 2
# ... with 2 variables: x <dbl>, y <date>
challenge %>% 
  head(1000) %>% 
  filter(x%%1!=0)
# A tibble: 0 x 2
# ... with 2 variables: x <dbl>, y <date>
challenge %>% 
  tail(1000) %>% 
  filter(as.integer(x) != x)
# A tibble: 1,000 x 2
       x y         
   <dbl> <date>    
 1 0.238 2015-01-16
 2 0.412 2018-05-18
 3 0.746 2015-09-05
 4 0.723 2012-11-28
 5 0.615 2020-01-13
 6 0.474 2016-04-17
 7 0.578 2011-05-14
 8 0.242 2020-07-18
 9 0.114 2011-04-30
10 0.298 2010-05-11
# ... with 990 more rows
(test_file <- read_csv("a, b, c
# Trying to confirm if integers get read in a double?
                      1, 2.3, 'x'
                      2, 5.4, 'y'
                      3, 6.7, 'z'"))
# A tibble: 4 x 3
  a                                                         b c    
  <chr>                                                 <dbl> <chr>
1 # Trying to confirm if integers get read in a double?  NA   <NA> 
2 1                                                       2.3 'x'  
3 2                                                       5.4 'y'  
4 3                                                       6.7 'z'  

Other strategies for read issues

Guess a few more rows

challenge2 <- read_csv(readr_example("challenge.csv"), 
                       guess_max = 1001)
challenge2
# A tibble: 2,000 x 2
       x y         
   <dbl> <date>    
 1   404 NA        
 2  4172 NA        
 3  3004 NA        
 4   787 NA        
 5    37 NA        
 6  2332 NA        
 7  2489 NA        
 8  1449 NA        
 9  3665 NA        
10  3863 NA        
# ... with 1,990 more rows
challenge2 %>% 
  head(1010) %>% 
  tail(11)
# A tibble: 11 x 2
          x y         
      <dbl> <date>    
 1 4548     NA        
 2    0.238 2015-01-16
 3    0.412 2018-05-18
 4    0.746 2015-09-05
 5    0.723 2012-11-28
 6    0.615 2020-01-13
 7    0.474 2016-04-17
 8    0.578 2011-05-14
 9    0.242 2020-07-18
10    0.114 2011-04-30
11    0.298 2010-05-11

Read in all columns as character

Then use type_convert() to help identify the column types.

challenge2 <- read_csv(readr_example("challenge.csv"), 
  col_types = cols(.default = col_character())
)
# Reminder: the surrounding parentheses perform the 
# assignment and also print the result
(df <- tribble(
  ~x,  ~y,
  "1", "1.21",
  "2", "2.32",
  "3", "4.56"
)) 
# A tibble: 3 x 2
  x     y    
  <chr> <chr>
1 1     1.21 
2 2     2.32 
3 3     4.56 
# Note the column types
type_convert(df)
# A tibble: 3 x 2
      x     y
  <dbl> <dbl>
1     1  1.21
2     2  2.32
3     3  4.56

The authors recommend always supplying col_types = cols(...) yourself, both for consistency and so that changes in the data are detected. You can start from readr’s guesses and build on them; a sketch of this follows.
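A sketch of that approach: supply the corrected specification explicitly and fail loudly if anything stops parsing (challenge3 is just an illustrative name).

challenge3 <- read_csv(
  readr_example("challenge.csv"),
  col_types = cols(
    x = col_double(),
    y = col_date()
  )
)
# stop with an error, rather than a warning, if any value failed to parse
stop_for_problems(challenge3)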

Writing files

  • write_csv(dataset, "file_path")
  • write_tsv(dataset, "file_path")
  • write_excel_csv(dataset, "file_path"). I did not know about this one! 🙏

The only issue with the above is that the column specification is lost when writing to a text format, so you need to recreate it on every subsequent read.

  • write_rds() [to read use read_rds()] stores data in R’s custom binary format, RDS, which preserves the column types (a round trip is sketched below).
  • write_feather() [and read_feather()] from the feather 📦 allows files to be shared across languages.
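A sketch of the RDS round trip (the temporary path is illustrative):

tmp <- tempfile(fileext = ".rds")
write_rds(challenge, tmp)
# the column specification survives the round trip: y is still a date column
identical(read_rds(tmp), challenge)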

sessionInfo()
R version 3.6.3 (2020-02-29)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19042)

Matrix products: default

locale:
[1] LC_COLLATE=English_South Africa.1252  LC_CTYPE=English_South Africa.1252   
[3] LC_MONETARY=English_South Africa.1252 LC_NUMERIC=C                         
[5] LC_TIME=English_South Africa.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] hms_0.5.3       magrittr_1.5    lubridate_1.7.9 emo_0.0.0.9000 
 [5] flair_0.0.2     forcats_0.5.0   stringr_1.4.0   dplyr_1.0.2    
 [9] purrr_0.3.4     readr_1.4.0     tidyr_1.1.2     tibble_3.0.3   
[13] ggplot2_3.3.2   tidyverse_1.3.0 workflowr_1.6.2

loaded via a namespace (and not attached):
 [1] tidyselect_1.1.0 xfun_0.13        haven_2.3.1      colorspace_1.4-1
 [5] vctrs_0.3.2      generics_0.0.2   htmltools_0.5.0  yaml_2.2.1      
 [9] utf8_1.1.4       rlang_0.4.8      later_1.0.0      pillar_1.4.6    
[13] withr_2.2.0      glue_1.4.2       DBI_1.1.0        dbplyr_2.0.0    
[17] modelr_0.1.8     readxl_1.3.1     lifecycle_0.2.0  munsell_0.5.0   
[21] gtable_0.3.0     cellranger_1.1.0 rvest_0.3.6      evaluate_0.14   
[25] knitr_1.28       ps_1.3.2         httpuv_1.5.2     fansi_0.4.1     
[29] broom_0.7.2      Rcpp_1.0.4.6     promises_1.1.0   backports_1.1.6 
[33] scales_1.1.0     jsonlite_1.7.1   fs_1.5.0         digest_0.6.27   
[37] stringi_1.5.3    rprojroot_1.3-2  grid_3.6.3       cli_2.1.0       
[41] tools_3.6.3      crayon_1.3.4     whisker_0.4      pkgconfig_2.0.3 
[45] ellipsis_0.3.1   xml2_1.3.2       reprex_0.3.0     assertthat_0.2.1
[49] rmarkdown_2.4    httr_1.4.2       rstudioapi_0.11  R6_2.4.1        
[53] git2r_0.26.1     compiler_3.6.3