Mar 10, 2014

Speed reading at 600 words per minute

The internet has been abuzz with news about the speed reading technology by Spritz. Many sites showed demo videos in which words are presented at very high rates, faster than what is usually achieved by so-called speed reading techniques. While Spritz seems intent on making money from licensing rather than selling applications itself, there are already many clones that let you read very fast.

Typically, speed reading courses aim to get you to speeds above roughly 200 words per minute. I read a book on speed reading some time ago, and it boiled down essentially to concentrating harder on the main points; I don't think I profited much from it. With the technique advertised by Spritz you can go beyond 600 words per minute, because the words are delivered in rapid succession at a fixed position, which helps you avoid eye movements. Some news sources have pointed out that at this rate you can read a novel in about 90 minutes. That is spectacular, of course.

A Perl command line tool is available on GitHub. For Google Chrome, there's the jetzt plugin, which works really well and looks fabulous. Installation is straightforward. The word rate defaults to 400 words per minute.

I like the Chrome extension. It has a downside, however: Chrome doesn't open all file types, and it is not even consistent - sometimes txt files get opened, sometimes they are downloaded. You can still read books with the extension (I did) by converting them to text (e.g. with pdftotext) and loading them into the browser. With Calibre you can also convert directly to HTML (ebook-convert input.pdf output.html).

From text you can get to HTML just by slapping HTML tags around it, but for readability it's better to trust tools such as txt2html. For e-books in epub format you can use epubtohtml to convert them to HTML.
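As a minimal sketch of the "slapping HTML tags around it" approach (the file names are just examples):

```shell
# Wrap a plain-text file in minimal HTML; <pre> preserves the line breaks.
# book.txt stands in for whatever text file you converted.
printf 'line one\nline two\n' > book.txt
{ echo '<html><body><pre>'; cat book.txt; echo '</pre></body></html>'; } > book.html
cat book.html
```

For anything beyond quick reading, though, txt2html produces much nicer paragraphs.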

Please also see my post on language learning to get more ideas of the possibilities.

Enjoy. Please leave a comment below for questions and suggestions.

Boot time in Ubuntu with Dropbox

I remember that, some time ago, my system used to boot in a few seconds, and I even tried to optimize the time by installing minimal window systems. Some friends who bought solid state disks got down to about five seconds, and some Linux distributions focused explicitly on boot time. Now my boot time is extremely slow: I have to wait up to several minutes for my computer to boot, even on my new desktop computer with a standard installation. A significant part of that time is taken up by Dropbox, and in this post I explain what I did to bring it down and get at least a somewhat more reasonable boot time.

My Ubuntu startup is sometimes extremely slow. When I check the CPU usage, Dropbox comes up as one of the heaviest consumers. What to do? One option is to change the nice level (process priority) of Dropbox. But why not use a more specific limit? cpulimit attempts to cap the CPU usage of a process, so let's use that with Dropbox!

We start by installing cpulimit. In ubuntu

sudo apt-get install cpulimit

... will do

We need to identify the process which we want to limit. One way is by process id (pid). Let's get the dropbox process id:

ps aux | grep dropbox | grep -v grep | awk '{print $2}'

cpulimit can also target a process by its executable path, but I am using the pid here.
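As a side note, if pgrep is available it saves the whole ps | grep chain. A sketch, demonstrated on a stand-in sleep process since Dropbox need not be running here:

```shell
# pgrep matches processes by name; -n returns the newest matching pid.
sleep 100 &            # stand-in process for this demo
pid=$(pgrep -n sleep)  # for Dropbox: pgrep -n dropbox
echo "pid: ${pid}"
kill "${pid}"          # clean up the demo process
```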

Now we need to plug this pid into the cpulimit syntax, where the limit is given as a percentage of CPU time:

cpulimit --pid=<pid> --limit=<percent>

The command can now look like this:

cpulimit --pid=$(ps aux | grep dropbox | grep -v grep | awk '{print $2}') --limit=10

In the terminal, I put an ampersand at the end to run the command in the background.

For now, everything seems to run smoothly: Dropbox's CPU share is reduced and the system feels more responsive. Let's see how it works out...

I enter this command in the GNOME menu of startup applications, after the Dropbox command. Note that inside double quotes the shell expands $(...) immediately, before Dropbox is even running, so the dollar signs have to be escaped to defer the expansion until the at job actually runs. Combined, it can look as follows:

echo "nice dropbox start -i & sleep 10; cpulimit --pid=\$(ps aux | grep dropbox | grep -v grep | awk '{print \$2}') --limit=10" | at now + 10 min
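The quoting detail matters here, and is easy to check in isolation: $(...) inside double quotes is expanded by the shell the moment the string is built, while inside single quotes it is passed through literally (and would only be expanded when at runs the job).

```shell
# Double quotes: the command substitution runs immediately.
d="pid=$(echo 1234)"
# Single quotes: the text is kept literal for later expansion.
s='pid=$(echo 1234)'
echo "${d}"   # -> pid=1234
echo "${s}"   # -> pid=$(echo 1234)
```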

Enjoy. Please leave a comment below for questions and suggestions.

Ubuntu for research (again)

I have had a lot of problems with my computers: I had to reinstall my system several times last week, and since my hard disk was exchanged, today I am doing it again. So here's how to reinstall an Ubuntu system. For more details, please have a look at my last post on the topic.

Become superuser:

sudo su

For Google Chrome (adding Google's signing key, then the repository):

wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add -
sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'

For additional ubuntu partner repositories:

apt-add-repository "deb http://archive.canonical.com/ubuntu $(lsb_release -sc) partner"

Update your sources:

apt-get update

Here I include many libraries and programs that you might need for research. See more explanations in my older article. I always install Dropbox.

export DEBIAN_FRONTEND=noninteractive
apt-get -y --force-yes install \
  aptitude openssh-server build-essential gcc gcc-doc apt-file \
  gsl-bin gsl-doc-pdf gsl-ref-html libgsl0-dev libgsl0-dbg libgsl0ldbl glibc-doc libblas-dev libatlas3gf-base \
  maxima maxima-share octave subversion subversion-tools git screen \
  texlive untex luatex perl fontforge context-nonfree context-doc-nonfree dvipng \
  latex2html latex-beamer writer2latex texlive-lang-german texlive-lang-spanish jabref bibutils hevea hevea-doc \
  imagemagick graphviz gnuplot-x11 gnuplot-doc gnuplot inkscape scribus pdf2djvu pdf2svg pdftk epstool transfig \
  kdevelop kate kile vim-gtk vim vim-addon-manager vim-common vim-doc vim-latexsuite \
  xpdf cups-pdf djvulibre-bin djvulibre-plugin \
  python-gdbm ipython python3-dev python3-all python-scipy \
  wordnet unrar tofrodos epiphany-browser epiphany-extensions scribes lyx \
  claws-mail claws-mail-i18n claws-mail-doc claws-mail-tools \
  libqt4-core libqt4-gui ubuntu-restricted-extras regionset soundconverter gxine libxine1-ffmpeg libstdc++5 libmms0 \
  zim gimp wine colordiff moreutils nautilus-dropbox trash-cli calibre chromium-browser hplip

Enjoy. Please leave a comment below for questions and suggestions.

Jan 25, 2014

Learn a Language - Example: Greek

Ok, so you've decided you want to learn a language. You've read an introductory book and gone through an audio course. Now what? Vocabulary! Here are some pointers to get you started.

You can use a vocabulary trainer, preferably one with spaced repetition, either online, e.g. Memrise, or offline, e.g. with Anki.

For many languages, there are vocabulary lists available which you can download, import, and study. These lists can be more or less useful, and well or badly curated, so as a beginner you often don't know how much noise you are studying.

What you want are vocabulary lists of the most important words, preferably grouped into subject categories, so you can study them, understand them in context, and apply them. That is the optimal case, but not the usual one. If you have to create your own lists, there are several ways to start:

1. learning by doing: as you go and interact in the language, you note down words and enter them in a list.
2. importing the most frequent words and learning them.

If you compile word lists yourself, there can be a lot of noise: misunderstandings, failed lookups, and misspellings, to name just a few sources of error. Curating a list is also a lot of work.

There are many word lists, some of them well curated, which you can use. You can even use automatic dictionary lookup to create your lists, e.g. using freedict. You can also make use of word frequency lists, or of vocabulary subsets such as Basic English.
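As a small sketch of such an automatic lookup, here with a hypothetical tab-separated dictionary file standing in for a real freedict export (the file name and entries are made up):

```shell
# Build a toy English-Greek dictionary and look a word up by its first column.
printf 'milk\tγάλα\nwater\tνερό\n' > dict.tsv
awk -F'\t' '$1 == "milk" { print $2 }' dict.tsv   # -> γάλα
```

Run over a list of words, this gives you a first, rough bilingual list that you then curate by hand.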

In the following code example, I show how to create a word list of 250 Greek verbs, filtered from a curated list by word frequency.

I downloaded both files as el_word_frequency.txt and Greek_verbs.txt. Words in the frequency file are ordered by frequency.
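For illustration, here is what miniature versions of the two files could look like in the format the script expects; the entries are made up, and the real files are much larger:

```shell
# Frequency file: most frequent word first, the word itself in column 1.
printf 'κάνω 1000\nλέω 700\nνερό 500\n' > el_word_frequency.txt
# Verb list: "verb, meaning" per line.
printf 'κάνω, to do\nλέω, to say\n' > Greek_verbs.txt
```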

#!/bin/bash
rm -f most_frequent_verbs.txt # output file
for word in $(cat el_word_frequency.txt | awk '{print $1}')
do
    echo "word: ${word}"
    found=$(grep -i "^${word}," Greek_verbs.txt)
    if [ $? -eq 0 ]
    then # found
        echo "identified as verb with meaning"
        found=$(echo "$found" | head -n 1)
        echo "${found}" >> most_frequent_verbs.txt
        echo "found: ${found}"
    fi
done
# keep the 250 most frequent hits; split each line into verb and meaning, then rejoin
head -n 250 most_frequent_verbs.txt | sed -e "s/\(^[^ ,\/]*\).*$/\1/" > most_frequent_verbs500.txt1
head -n 250 most_frequent_verbs.txt | sed -e "s/.*\([0-9][0-9]*\|vt\|vi\)[\., ]*\(.*\)$/\2/" > most_frequent_verbs500.txt2
paste most_frequent_verbs500.txt1 most_frequent_verbs500.txt2 > most_frequent_verbs500.txt
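To see what the two sed expressions do, here they are applied to a single made-up dictionary line:

```shell
line='κάνω, 1. to do, to make'
# First expression: keep everything up to the first space, comma, or slash (the verb itself).
echo "${line}" | sed -e "s/\(^[^ ,\/]*\).*$/\1/"                          # -> κάνω
# Second expression: drop everything up to the last number (or vt/vi tag), keeping the meaning.
echo "${line}" | sed -e "s/.*\([0-9][0-9]*\|vt\|vi\)[\., ]*\(.*\)$/\2/"   # -> to do, to make
```

Note that the \| alternation is a GNU sed extension to basic regular expressions.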

I also generated sound files for audio vocabulary courses, but the automatic text-to-speech output for Greek was too poor to be useful.

Please also see my post on speed reading to get more ideas.

I wish you success and fun! If you have any improvements or similar ideas, I might be interested in hearing about them.