Dragon Professional Individual Version 6 – First Impressions

Getting started with Dragon Professional Individual for Mac version 6

The installation of Dragon Professional Individual was pretty straightforward. The .DMG file opened itself up and all I had to do was double-click on the icon in the window. When Dragon opened for the first time I had to upgrade my profile to this latest version. I expect this is a one-way process, so just in case you have any problems I recommend you have backups. It’s only sensible to have a good backup plan in place anyway. The next thing I had to do was set up the microphone. I was given a text to read and that went okay. It would have been nice if there was some sort of feedback to show the audio of my voice was going in okay. In previous versions it put a colour on the text as you read it. With this one there wasn’t even a reading on a meter to let me know the gain setting on the microphone was okay. Following that it took some time before the application was usable. It may have been a minute or even 90 seconds before the spinning beach ball went away and I could start dictating. I was getting a little worried I might have to kill the application and start again.

A guidance window has popped up telling me what I can do with DragonDictate. It’s telling me things like:

  • “Cancel Correction” to cancel
  • “Edit number” to dictate a correction
  • “Spell That” to spell a correction
  • “Play The Selection” to hear what you said

There is a new little box in Dragon Professional Individual which gives you the icon for the microphone. This lets you see when the microphone is working, asleep or turned off. The icon in the centre is where you can choose from Dictation, Command, Spelling or Numbers. The icon changes to show you which mode you’re working in.

The third of these icons lets you show or hide the extra windows. There is a correction window, there is the guidance window and there is a commands window, which is not showing up for me at the moment. The guidance window is telling me “Ulysses does not support mixing dictation and typing in this area.” (I’m writing in Ulysses.) Also within this guidance window there are two buttons, one for Commands and the other for Help. The Help button takes you to a webpage in your default browser. The Commands button is still doing nothing at all for me.

Accuracy and speed

I’m using a profile which was updated from the previous version and so far I have no complaints about the accuracy of the speech to text. There have been only a couple of words so far that the DragonDictate application hasn’t recognised. I was able to spell out a correction and continue with my dictation, no problem at all. The application was giving me the word metre, as in the European spelling of the measurement of length, when I wanted the word meter, which is a type of gauge. The new version of DragonDictate is supposed to have deep learning and machine learning, and should offer me both of these words to choose from. It isn’t doing so yet, but maybe this is just something I have to look out for later. It could be too soon to say.

It’s hard to assess the speed of the application when it’s converting the speech into text. It does feel a little bit quicker, but it’s not something you can put a timer onto and say for definite whether it is faster or not.

Making corrections

I’m having the same sort of problem I had before with DragonDictate version 5. I tell it to go to a specific point in the text and I say delete word, and sometimes it will delete just part of a word. This shows that the application has lost track of the words put into the document so far. I have been careful not to mix any typing in with the dictation, so I’m not quite sure why it should be doing this. It’s easy to remedy by issuing a “Cache Document” command and continuing with the dictation.

In the Correction window of DragonDictate Professional Individual you get a number of choices, and in most cases the first choice is what you’ve just said, all in lowercase. The second choice is the same but with title case, and the third choice will be the same again but all in upper case letters. There will be other options available when the software is not absolutely sure what it was you said, or where there are homonyms.

Overview of my first impressions

In this dictation session using Dragon Professional Individual I’ve been dictating into Ulysses and I’m happy enough with the way it is working. I’ve just given it a test in the application TextEdit and the message about not using the keyboard at the same time as dictation was not showing. I gave it a test by typing in a sentence and then followed it with some dictation. That didn’t give any problems and then I further tested it by telling DragonDictate to select the text I typed. I was shocked and impressed DragonDictate was able to do that without even batting an eyelid. I didn’t expect DragonDictate to know that the text existed within the document.

Writing – dictating in Scrivener

This is another test of Dragon Professional Individual V6 by typing or dictating into Scrivener. Same as when using DragonDictate in TextEdit, I’m not getting that extra message to say that I mustn’t use the keyboard at the same time as using dictation. The words are popping into the document extremely quickly and once again I am impressed. This could have me using Scrivener more than I’m using Ulysses. I’ll definitely be giving this some more testing in the Scrivener application. I would say this is an improvement on how DragonDictate used to work with Scrivener. This makes the upgrade to Dragon Professional Individual version 6 well worth the money.


I am tempted to start up a new DragonDictate profile to go with this new version of the application, purely in the interests of testing it. So far, while using the profile which was converted from version 5 of Dragon, everything seems to be going great. Another thing I quite like about using Scrivener rather than Ulysses is that I’m getting a word and character count in the bar at the bottom of the screen. To see the word count in Ulysses I have to use Command and 7. It’s much more immediate to be able to see it directly on the screen without having to press any buttons.

A Big Thumbs Up for version 6

There are still more things to test, such as the transcription facilities. I have only been working with this for about an hour or so. This really is a down and dirty first look at the application and I will need to give it some more time. There will be another test coming up when macOS Sierra is launched. I prefer to have the latest version of the operating system software on my computer. The word from Nuance is that they haven’t been testing Dragon Professional Individual V6 with macOS Sierra, so we just have to keep our fingers crossed. If it doesn’t work with the new operating system from Apple I would be inclined to wait before upgrading to macOS Sierra. The benefits I get from using this excellent speech to text software for my writing are much more important to me than whatever is coming new from Apple.

Posted in Good and Geeky.

Good and Geeky or Bad and Geeky

Taking the good and geeky too far

I like geeky stuff but I am a realist about it and I want the tech to improve my life. I have noticed some geeks of the world go a bit too far and exhibit extreme geekiness. I’ve been listening to a podcast lately in which the hosts chat about Apple tech. It is often interesting due to the information they come up with about using iOS and the Mac. This is mostly due to the Italian guy who is a proponent of using the iPad as his main computer. He really does do just about everything with the iPad Pro. Federico shows he can get it all done with the latest iOS operating system and the fantastic apps available. This is the Good and Geeky side of the podcast. The Bad and Geeky side is the obsessions of the other hosts. One has a collection of all the iMacs Apple have made. Some are basically the same machine but in different colours. Why would anyone need an old computer duplicated in Bondi Blue and the other colours? The computers are not really going to get used as the operating system is out of date. All you can do is put them on a shelf and look at them. People will look at you and think you’ve taken leave of your senses. The good and geeky person will be finding new ways to use technology to get things done. The proper geek will revel in having tech solve problems in real life. The other podcaster has his equivalent of the stamp collection or beer bottle top collection in the form of pens. Everyone needs a hobby, I suppose.

Obsessing on the details

The same bunch of podcasters will buy multiple keyboards and be obsessive about the amount of travel in the keys. The layout of the keys is super interesting to them, so they have to have one with a British layout and another with the US layout. It could be they are going into depth as tech reporters and experiencing these keyboards for the benefit of the listeners. On the other hand, they could be going over the top in the geek department. They talk for ages on the subject when normal people just get a keyboard and get on with it, doing real life stuff and adapting to the hardware they’ve got.


The Apple fanboys

The Apple fanboys can talk the hind legs off a donkey when it comes to discussing the intentions of the computer company, making guesses about why and how Apple have implemented some tech. Often the Apple podcasters will talk for hours on stuff Apple might do. Rumours are circulated and the Apple fanboys obsess over things that may never happen. While doing this, pronouncements are made like, “That’s not the way Apple does things”. None of these fanboys have any idea, because they don’t have access to the decision makers in Apple. It seems daft to waste time discussing any of the rumours and much better to wait until proper announcements are made, tech is presented to the public and there is something concrete to pull apart.

Getting excited over nothing – In the latest operating system upgrades coming, macOS Sierra and iOS 10, some twiddly bits are being added to the Messages apps. I have been amazed at the level of excitement from the Apple podcasters. There is nothing earth shattering about fluffy additions to make Messages look pretty. There are no major changes to the basics of how we communicate, and yet the Apple commentators have been apoplectic in discussing the changes. Technology does have the possibility to change lives by making things easier and better. I suppose adding some richness to the communication system could be handy. I wonder how it will all work out in the real world when the new operating system updates are rolled out to the public. At least we can say how much better it is now we all have these computers in our pockets. I remember when I had to leave the house and walk fifty metres to get to the public telephone box down the road. Maybe that’s why I love being Good and Geeky.


Learning to Code in Swift on the iPad

Working on learning to code on my iPad

I now have three days in a row using TapCoding on the iPad

Today I looked at the code for making functions. I finally started to see the light about internal and external names for the parameters. I may go through this set of lessons again in a week’s time to further reinforce these concepts in my memory, and hopefully I’ll be able to do some proper coding using functions.
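To pin the idea down for myself, here’s a minimal sketch of how the external and internal parameter names fit together in Swift. The function name and values are just made up for illustration:

```swift
// "for" is the external name callers write at the call site;
// "celsius" is the internal name used inside the function body.
func describeTemperature(for celsius: Double) -> String {
    if celsius < 10 {
        return "cold"
    }
    return "warm"
}

// The external name makes the call read like a sentence.
let report = describeTemperature(for: 21.5)
print(report)
```

If you put an underscore in place of the external name, callers leave the label out altogether.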

Next I will go through the Data 1 section and redo the three lessons within that area.

I need to relearn the lessons about using arrays. It seems to be a good idea for me to keep going over the lessons until I can go through the tests without making any mistakes. I can’t really build upon previous lessons if I don’t fully understand what is going on.
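As a reminder to myself, the array basics from those lessons look something like this. The values here are just examples:

```swift
// An array keeps values of one type in order, with indexes starting at 0.
var scores: [Int] = [72, 85, 91]

scores.append(60)                // add a value to the end
let first = scores[0]            // read a value by subscript
let total = scores.reduce(0, +)  // add all the values together

print(first, total, scores.count)
```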

Finding it a good idea to use Xcode

One of the things I find difficult is finding the lines of code with problems in the TapCoding application tests. It’s a good idea for me to type the code into Xcode, as it gives me a clue as to where the problems are. The good thing about using Xcode is that when you want to put in a function, after you press Enter to complete the code entry it gives you the rest of the function template for you to complete. All you have to do is tab from one section to the next. This makes coding a lot easier and gives you an idea of what bits you need to include.


Voice First Interaction – Talking to computers

Voice First interaction

It’s only natural that our interactions are verbal and voice-activated. It’s been that way for thousands of years. Since the computer era started, we’ve had to work out how to interact with computers, going from the old days of having to use holes punched in cards, moving on through tape and magnetic discs, and moving further forward to using keyboards to communicate digitally. Most people probably still consider keyboards the only way to communicate with computers, but I’m seeing more people wanting to use speech to text software to get the data in. Voice first interaction is the way forward. When you’ve got great software like DragonDictate, why would you want to use a keyboard and give yourself carpal tunnel syndrome? I can already use Dragon for Mac to control my computer to a certain extent. I use it to open applications and to close them. I have set up commands within some applications. I have a journalling application called Day One, and when I’m using it I can say the command “New Entry” and that’s exactly what will happen. Then I can dictate the entry. It is nice to sit back in my office chair, put my feet up on the desk and be completely relaxed while entering data. Dragon dictation is super fast, being three to four times faster than typing. Now we will take it a stage further with the addition of Siri to the Mac as from macOS Sierra. We’ll be able to do the same sort of things as we do with our iOS devices: ask questions for information, give instructions like booking a table in a restaurant or ordering a taxi. Our desktop computers will have access to artificial intelligence and a voice first interaction. Just because it’s on a desktop machine, it doesn’t mean we will be having conversations with it, at least not yet, but that’s the direction we are going in. The future is for voice first interaction with computers.
Maybe in ten or twenty years’ time keyboards will be something we’ll laugh at and say how strange and funny it was that we used to use those weird things. We will all have a digital assistant we can talk to and have conversations with. They may end up being our special friends as well as providing us with computer-based services of one sort or another.


Our special friend always with us!

There’s a movie called ‘Robot and Frank’ in which an elderly gentleman is given a robot and at first he doesn’t like it. After a while he gets used to the robot and has conversations with it and then involves the robot in his illegal activities. It turns out Frank is a retired jewel thief and he uses the robot to enjoy some of his glory days of stealing diamonds. The robot becomes his sidekick and helps him do things he wants to do, although it’s antisocial to an extent and just plain wrong. Within the movie there is a moral to the story. Will we have these sorts of artificial intelligence interactions and get moral guidance from Voice First computer systems?



We’ll get used to talking to our computers and our internet of things. In the past, some people were scared of talking on the telephone. There are still people scared of using computers. Hard to believe, but true. In the future we’ll get past our awkwardness about talking to our computer devices. I know I’m more self-conscious dictating using Siri when there are other people present than when I’m on my own. I quite like to show off with Siri. I ask Siri what the weather is going to be when I’m at work; customers want to know if it’s going to be sunny or not. Those I give the information to in this way are usually quite impressed with how I gather it. Children have already got used to this idea of talking to machines; they are digital natives. Parents and teachers have observed kids talking to the intelligent assistant and having a conversation, at least within the limits of what they can do with Siri, Alexa or Cortana at the present time. I could see it comforting a lonely person to have an artificial intelligence to talk to. No judgment or arguments, just a back and forth of communication between human and AI. I suppose you can shout at it if you are having a grumpy day, but it won’t shout back. It might even have strategies to calm you down – “Calm down love, go and put the kettle on and let’s have a cup of tea, shall we?” Hopefully it will not sound too patronising or it would likely be thrown across the room and end up in a million pieces on the floor.

 

Artificial intelligences talking to each other

This phenomenon has already been tested where device owners have got Siri to talk to the Amazon Echo. The conversation doesn’t go back and forth too many times, but there is interaction between one bot and another! For now, these sorts of things are done for curiosity and humour. Will this inter-device communication work in real life eventually? How will the competing technologies get on with each other? Will it be possible for these voice assistants to use machine learning to identify whether they’re talking to another robot or to a human being? We may have to tell these interactive voice assistants to shut the hell up. “Please be quiet, robot, so I can get on with what I’m doing with no interference from you”.

Taught by our artificial intelligent friend

When I ask Siri questions that are too difficult for it, at the moment it only sends me to a webpage with more information. There will come a time when I can ask my special friend to tell me about something and I’ll get a proper answer. It is already possible to do this to an extent with the Amazon Echo and Alexa. This whole area of voice first interaction is still in its infancy. There is a long way to go before we’ll get something that works as well as we would like it to. In the future our children will have an intelligent artificial special friend who will follow them from birth onwards. Children will get as much learning, information and help from the robot as they do from their parents and teachers. Maybe even more in some cases.


Voice assistants – Do we have to have a winner?

There are competing technologies in this space: Google Home, Apple’s Siri, Microsoft’s Cortana, Viv from the people who were originally working on Siri, as well as technologies and companies as yet unheard of. Competition like this is what we want, to give us an arms race of digital voice assistants. As each of these companies tries to outdo the others, the products keep getting better for the consumers. It is possible that in the end it won’t really matter which hardware or software we have. The job will get done sufficiently well by whichever one you use. Money will come into it in time, as these systems and services will have to be monetised. I hope it will be with a one-off easy payment and none of this having to pay monthly to get the service we want.

What about the security and privacy?

To get our voice first interaction with our digital assistants we will have to configure them to listen to us at all times. The computers will only be able to listen to us and serve us if the microphones are always switched on. It might be a good idea not to have these microphones enabled in the bedroom! Each of these companies will have to give us promises with regard to our security and privacy. Apple will go down the route of differential privacy. Other companies will use other methods of anonymising our data. I expect there will be a proportion of society who will never ever use voice-based services. Whatever promises are given will not be enough to calm those paranoid minds. I would like to use this sort of technology, so I would think along the lines of allowing my voice data to be used to give me the services. If in time my trust in the company providing the services breaks, then I would stop using it. Hopefully it wouldn’t be too late by that time!

Machine learning – Voice First Interaction

In the past we have experienced making phone calls to companies when the unintelligent voice system could not understand us. You have to try endless repetitions because the system doesn’t remember what you’ve said already. The system is not clever enough to work out what you might want instead. It gets even worse when the system is poorly designed and we end up in a loop. It’s at times like this that we pull our hair out and just give up. What we really need is for the computer systems to have machine learning: have the computer remember what it was you said before and work out the probabilities of what you really want to ask. There may even come a time when we can expect these digital voice assistants to interpret our voice inflections and assess our frame of mind and mood. If cameras are used, they could also take inferences from our body language. I’m not sure that many people would be too keen on having cameras trained upon them all the time. It depends upon who has access to whatever is recorded and the security of those recordings. It will be a balancing act between the perceived benefits of talking to our computers and whatever downsides there might be. People behave differently when they think they are being observed. How 1984 Big Brother could it get, and will there be ‘Thought Police’ in the future? There will be checks and balances, plus choices to make on how it works in our lives, just like we choose how much to tell Facebook or other social networks. Voice first interaction is coming. The best way to operate our computers will be voice first: tell your artificial intelligent assistant what you want to have happen, and allow the artificial intelligence to help you make it happen according to your wishes and best interests.


The future is Artificial Intelligence

With the advent of chat bots, and devices you put in your house that listen to you, answer questions and do your bidding, AI has to be in our future. Like in the movie Back to the Future, when a version of Marty enters the house and tells the TV wall to put on a load of TV channels, we’ll be able to do that sort of thing and more. It isn’t going to be as lame as it is now, when all we can do is turn on light bulbs and let a thermostat regulate the temperature of the house. I am expecting something we could call intelligent. We are right to worry about privacy, but the device doesn’t have to listen and record everything. It just has to pay attention after the magic keyword is used to wake it up. Then it can answer us and help us out in our lives. Better still if the AI in our homes can learn from our behaviours and adjust. Our lives can be improved and assisted by computer algorithms in the form of artificial intelligence.

What do we have so far?

  • Amazon Echo
  • Siri
  • Google Home
  • Viv

Some of those are promises of things to come, but we nearly have them to use and abuse. Siri will have to improve, and maybe the new tech (Viv) from the people who made Siri will give us the big jump needed. Being able to ask a question which refers to a previous question will be marvellous. Artificial intelligence will get better at understanding the context of what we say. Queries on top of queries will be so useful. Having a computer understand what we want from plain natural language will be the game changer. The systems will be able to understand what we want and offer us intelligent choices. Perhaps it can give us options we never would have thought about and really be helpful in a proactive way. Not that we want a busybody know-it-all telling us “I wouldn’t do it like that if I were you!” At the same time, the thought of having a proper virtual assistant is appealing. If the AI hears doors being slammed and lots of shouting, it could start playing music we’ve used before for relaxing and calming down. It could ask if there is anything it could help with. It could just shut up and let us get on with it, too.

Artificial intelligence is more than telling the computer to play our music

Artificial Intelligence will be us having a conversation with our computer. We will be augmenting and enhancing what we can do with our natural brains, like when you work as part of a team of people or even when you just have a work partner you use to bounce ideas off. Two heads are better than one, and a virtual assistant made from artificial intelligence will be better than that. Will we be able to configure the electronic virtual assistant to keep track of our previous thoughts and questions? It could let us know that we tried something before and it wasn’t too successful, and perhaps we should try a different approach this time.

Getting the AI to help out

I would like the AI to remember what questions I asked the day before and ask if I want the information again. When I wake up it could say “Good morning Dave. You have a few emails waiting for you. Here are the most important of them needing a response first. I’ve already replied to the message from your brother. I suggested to him a meeting on Friday would be better because you don’t have time on Thursday. Today is the day to pay the water bill, shall I give the OK to your bank account to pay it? The weather is looking good for the day and you won’t need to take the umbrella. By the way, you forgot to plug in the Tesla last night so I plugged it in for you. You have to use the car to get to your 5 o’clock meeting. Is there anything else you want me to organise for you?” Then the Artificial Intelligence device goes quiet and starts to play some music to aid the start of the day.

Some of those things the Artificial Intelligence might not have to inform us about. It could just make sure the bill got paid on time and only let us know if we asked about it. Same with the car getting plugged in and charged up ready to go. It could be as if the house elf did it for us and we only have to jump in the car with the confidence that it will have a full set of batteries.

At the moment the AI available to us can only do one command at a time. It will be handy when we can tell it to turn on the television and choose the channel, turn the lights down low, lock the doors, and turn on the popcorn machine. We might have to get around that by setting up a chain of events to one command – “Siri – Set up film night…”

Artificial Intelligence learning our behaviour

What will also be useful is for the environment to learn what we need and just do it. So if it is Thursday night and we arrive home at a certain time, the house knows it’s likely we want a film night and asks us if we want to set everything up. All we need to do is dive onto the sofa and relax.

Will Artificial Intelligence compromise our privacy?

Some people will be worried about AI being invasive. Privacy is important, and there will need to be some checks and balances put in place. Any private information needs to be kept private. If information has to be used to make the Artificial Intelligence function fully, then the data should be anonymous. We don’t want outside entities such as big companies, governments or criminals being able to use our data against us. I feel confident that these Artificial Intelligence systems will be protected and we won’t need to worry too much. I say this because we are all aware of what could go wrong and the software is growing up in a time when security is considered important.

Benefits to Artificial Intelligence

Allowing Artificial Intelligence in our lives has benefits. We all want to get more done and time is often running away from us. Using technology is one way we can be more efficient. It’ll be great to let computer intelligence take some of the load off our shoulders. It has to be set up right and used properly, but that’s the same with any tools we use. Artificial Intelligence is a tool that is coming and will let us get more from life. Boring and mundane tasks can be passed on to the AI so we can do something more interesting.

What will you ask Artificial Intelligence to sort out for you?

  • People use the Amazon Echo to ask questions and get facts. So you could ask it what the capital of a particular country is.
  • Ask the Amazon Echo to read your audiobook to you.
  • The Amazon Echo will tell you jokes. They might not be that funny.
  • Start and stop timers, which would be handy if you’re in the kitchen and your hands are covered in flour.
  • Tell the device to play a specific music track you want to listen to.

A home made Amazon Echo with a Raspberry Pi


I’ve had this Raspberry Pi for a wee while now and I initially got it to use with a camera for home security. About a month ago I saw a project online where you could use the Raspberry Pi to create your own home made Amazon Echo. That left me with a dilemma as to what I should do first with this tiny little computer. Whichever of these two projects I was going to do first, I knew they were going to take some time to get working. On the webpage for this Raspberry Pi Amazon Echo project there are quite a lot of steps to be followed to get the job finished. It’s always a good idea to do a project like that when it is fairly freshly published. The reason is that things change over time and these projects don’t get updated to take account of the changes made. What happens is, you get part way through the project and find some inconsistency, which leads to an enormous amount of head scratching. Just one small change can derail a project like this. If everything does go to plan then it isn’t any bother at all to create your home made Amazon Echo. If it doesn’t go exactly to plan then you won’t find any help to get you around the problem that was created. I did run into a couple of difficulties along the way, but I did make it to the end. I now have a home made Amazon Echo I’ve been able to ask questions of. I’ve used it to set a timer and also to set an event in a calendar. I even persuaded it to tell me a joke. Not that it was terribly funny.


Why create a home made Amazon Echo?

Using and abusing an artificial intelligence through a stand-alone device which sits in your home seems to be the way things are going. Google has just announced it will be giving us a device called Google Home. Amazon have the Echo. New artificial intelligence software has recently been announced by the same group who made Siri. This new one is called Viv and we don’t really know yet how it’s going to be made available to us end users. Will Apple buy it in the same way it bought Siri, or will it be something that third-party developers will be able to integrate into their applications on whatever platform? It does seem quite appealing to have an artificial intelligence to do our bidding, when all we have to do is talk to it. In just the same way as the Star Trek personnel talk to their computer, or the crew from the old TV series Blake’s 7 could tell their computer system what to do. It doesn’t have to be something set in a spaceship; it can also be like in the movie Back to the Future, where Marty comes back home from work and talks to the computer. He tells the computer system of the house to turn on the television channels he wants to watch. We can do those sorts of things now by telling the virtual assistant residing in our artificial intelligence device what we want to have done. We can now have a smart home where we integrate our music and television services along with devices which measure the temperature of the home. The smart home can control door locks and set up lighting scenarios, and we can do this with devices from Amazon, Google and Apple. It is still early days and some parts of the system work better than others.

The Amazon Echo at the moment is only available in North America. It is possible to find a way around this and get one into Europe. If you do, you don’t get the same capabilities as you would by using it in America, but it is possible. I don’t think I’ll keep using the Amazon Echo system on my home made version for long as it currently stands. I find it too annoying to be given measurements in antiquated measuring systems like Fahrenheit and miles. I’d rather wait until it uses the much more reasonable kilograms, centigrade and kilometres. Even so, it is well worth having a look at what can be done with this home made Amazon Echo. You can ask all sorts of questions and get answers as it hooks into various services. I’ve used it to give me the latest news from the BBC. I’ve even been able to get it to play audio from one of my Kindle books, even though it didn’t do a very good job of it.

Creating the home made Amazon Echo step-by-step

There’s not much point in me going over step-by-step what I did to create this home made Amazon Echo. Here is the link to the Raspberry Pi project. There are quite a lot of steps and you do have to learn how to do things like use the terminal. There are Linux commands to put into the terminal which you don’t really have to understand, at least not fully, even when you hit some of the road blocks. I had to make a change to one Linux command to take into account a newer version of Java. You do need to know how to change directories using just the terminal. There were times I found it easier to get to the directory I needed by doing a right click on the directory in the file manager; you can choose to open that specific folder in the terminal.
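That directory juggling can be sketched in a few lines of terminal work. The folder names below are made up purely for illustration; the real project tells you exactly which directories its commands must be run from.

```shell
# A minimal sketch of changing directories in the terminal.
# The folder names are hypothetical, just for practice.
mkdir -p /tmp/avs-demo/samples/companionService

cd /tmp/avs-demo              # jump straight there with an absolute path
cd samples/companionService   # or step down with a relative path
pwd                           # prints /tmp/avs-demo/samples/companionService
cd ../..                      # climb back up two levels
```

The right click trick in the file manager does the same job as those cd commands, so use whichever you find quicker.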

Many of the Linux commands involve telling the Raspberry Pi to download and install software. A couple of times I found that a command to get a service working had to be run from within a specific folder or directory. Sometimes it was easy to get things done just by copying and pasting the code for the command from the project details. There were a couple of occasions where I was scratching my head and thinking I might have to give up. What I did to get around this was to go back to previous steps in the project and redo them. I think a couple of times I had done something correctly but the system had stopped working and I just needed to go back and restart it. The project can’t put in every piece of information to deal with all the problems you could come up against. On one occasion I did resort to googling the command to find out where I was going wrong. Basically, when it comes to it, you do have to be quite tenacious and tell yourself that you’re not going to give up. At one point I decided to take a rest and grab a nice cup of tea (otherwise known as thinking juice), then go back in and check things again.

It pays to have a look at the sometimes quite verbose notifications in the terminal window about what’s being done. This is where you can see what’s going wrong; at least it will give you a clue here and there. This is how I found out that one of the services needed to run the home made Amazon Echo had stopped running. It was because of this that the following steps were not working as they should.
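That sort of quick sanity check can be scripted. This is only a sketch: the process name companionService is my hypothetical stand-in for whichever service your project starts in the background.

```shell
# Check whether a background service from an earlier step is still
# alive before blaming the current step. The name companionService
# is hypothetical - substitute the process your project actually runs.
if pgrep -x companionService > /dev/null 2>&1; then
  msg="companionService is still running"
else
  msg="companionService has stopped - go back a step and restart it"
fi
echo "$msg"
```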

Limitations of a home made Amazon Echo

In order to get this home made Amazon Echo working you need to register for an Amazon developer account. It’s very easy to do and you can see that Amazon expect people to do this sort of home made Amazon Echo project. Even though at times it seems like some devilish hacking type of activity, it has received the blessing of Amazon. The main limitation you’ll find when you’ve completed the device is that, unlike the $179 Echo, it is not allowed to listen continuously. So you’re not able to start the Echo working by saying the keyword Alexa. You have to press a button within the software on the Raspberry Pi for it to start listening. I have also found that the listening only lasts for a very short amount of time. For some of the questions I wanted to ask Alexa I didn’t have enough time. Something more complicated, like getting it to change an event I had previously created with the home made Amazon Echo, just didn’t work. The microphones in the real Amazon Echo are going to be much better too. I connected my normal studio microphone, which is good for near field listening. The real device can hear you when you are further away in the room.

The Amazon Echo versus Siri versus Google Home

For a start, Google Home has only just been announced and is not available until later in the year. Only then will we be able to do a proper comparison with the Amazon Echo. Both of these devices have to be plugged in and so don’t have the advantage of the mobility you get with an artificial intelligence available on your mobile device, such as your iPhone or Android device. Google has its Google Now, which makes an attempt at being your artificial intelligence assistant. Apple have come up with something similar, which they call Proactive. The Google one works the better of the two due to the lower privacy threshold than you have with Apple. Apple don’t want to mine your data to know what you’re doing and when you’re doing it. There are rumours that Apple will be coming out with a device similar to Google Home and Amazon Echo. I expect we will find out whether that is the case when WWDC takes place in June. It would be useful to have an electronic box from Apple that would contain everything for HomeKit and would also include all the stuff that the Apple TV can already do. I would love to have a device that could do all of that for controlling the home and being my virtual electronic assistant. For the moment all I have is my home made Amazon Echo, which is fun to experiment and play with. As I said at the beginning of this article, it is early days yet with any of these sorts of artificial intelligence devices. This is the way the future is going and I’m looking forward to it.

Posted in Good and Geeky.

Photos In a Good and Geeky Way

It was a good and geeky Sunday morning

When I have a visit from my mother she likes to go to the Sunday morning market in St. Feliu. I have absolutely no desire whatsoever to wander around the market looking at all that stuff. There’s not much stuff for us good and geeky types! So my wife will go with her around the market while I go to the harbour. On Sunday morning I took my Sony NEX6 mirrorless camera with me. I always have my iPhone with me for photographs anyway. I like to take pictures of the fishing boats, getting close up shots of rust and hydraulics. Most of the fishing boats were out so I only got a few shots of my preferred subject matter. So what I did was to sit in the car for a while and try to get images from the camera over to the iPad. I only had partial success with this effort.

Downloading via the hotspot

I was having some difficulty connecting the iPad to the camera. So I did some checking on the camera and found that the software for connecting over Wi-Fi to mobile devices needed to be updated. With a bit of good and geeky fiddling around I connected the camera to the iPhone Wi-Fi hotspot. I was then able to download the camera connection application update and I downloaded another app as well. I was looking for the camera software to show the iPad a QR code so I could use it to connect the two devices. I didn’t need the QR code after all, because after updating the software the camera connected straightaway to the iPad. The camera connection software on the camera connects with the PlayMemories Mobile app from Sony on the iOS devices. My idea was to send some of the photos already taken during the morning trip around the harbour to the iPad to do some work in iColorama. I’m an iPad artist. Despite my good and geekiness I wasn’t able to get photos already taken from the camera to the iPad. I would probably need to use the iOS camera connection kit for that. However, I was able to control the camera from the iPad, which is quite useful. Shots taken directly like this do transfer across to your iOS device.

Controlling a Sony Mirrorless camera from the iPad or iPhone

The PlayMemories Mobile software from Sony lets you change settings and shoot pictures on your camera from your iOS device. You can change things like the aperture setting or the shutter speed. I quite like being able to use the touchscreen to set where I want the focus to be in the picture. There are other settings you can fiddle with too, such as the ISO setting. There is a noticeable lag on the connection between the camera and the iOS device. If you’re working with the camera on a tripod then it doesn’t really matter. In any case, the most important thing is that when you press the shutter button it fires almost instantly and you’re not likely to miss the action in the scene. On Sunday I didn’t have my tripod with me so I only took a couple of shots. I was impressed that it worked as well as it did and I can see myself using it again. It would be good for candid street photography. You could put the camera in position with the desired background and take photos as people walked into the shot.

Raspberry Pi Geekery

Good and geeky with the Raspberry Pi

With my iMac away getting fixed, today I decided to connect the extra monitor to use with my Raspberry Pi. Previously I had been using the Raspberry Pi connected to the television, which is not quite so good and not very comfortable to use.

In the Raspberry Pi operating system I opened up the terminal. Then I started putting in commands to download and install applications onto the Raspberry Pi. They were quite easy commands, but you do have to make sure you get them exactly right for them to work. I added a VNC server so I would be able to connect to the Raspberry Pi from another computer even if I haven’t got a monitor connected to it. This is so I can use the Raspberry Pi completely headless for making a security video camera. The other possibility I am considering is to use the tiny computer to make a home-made version of the Amazon Echo.
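For a flavour of what those commands look like: on the Raspberry Pi, software is usually pulled down with apt-get. The install lines below are shown as comments because they need root and a network connection on the Pi itself, and tightvncserver is just one commonly used VNC server package; the one you choose may differ. The runnable part checks whether the program is already on the system.

```shell
# Typical install commands on the Raspberry Pi (run on the Pi with sudo):
#   sudo apt-get update
#   sudo apt-get install -y tightvncserver
#
# Afterwards you can confirm whether the program made it onto the system:
if command -v vncserver > /dev/null 2>&1; then
  status="vncserver is installed"
else
  status="vncserver is not installed yet"
fi
echo "$status"
```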

Everything was going mostly okay except for the last bit I did today, when something didn’t get properly installed. I was running short of time, so I’ll have to have a look and see if I can try once more to get the job done right. With the home-made version of the Amazon Echo you connect a small microphone to it and a speaker. When you have an Amazon developer account you can ask the Amazon Echo questions. It is a bit like using Siri on your iPhone, but it ties into the Amazon services.

Pine 64

I have another tiny computer which will do something similar, a Pine64. Maybe I can get that to work in some sort of good and geeky project. It’s the sort of thing which we nerds love to do, and increasingly the general public does too. It’s great fun to play with these technical devices and bend them to our will with code and willpower.

The Good and Geeky iPad Artist

The Latest Book from Good and Geeky – iPad Artist

Good and Geeky iPad Artist

Buy the book today

It took me a long time to create Good and Geeky iPad Artist because I was spending so much time using the art tools to create art. I have the Apple iPad Pro and the Apple Pencil and it is so easy to get lost in the fantastic applications available. I found I would jump into applications like Procreate and iColorama and two or three hours later I would still be there drawing and painting on my iPad. I am a good and geeky iPad artist. In the book I talk about a couple of different styli you can use. If you have one of the Pro versions of the iPad then you really do have to get the Apple Pencil to be a proper iPad artist. One of the things that really makes a difference is being able to rest your hand on the screen while you are drawing. The only marks that arrive in the drawing are from the Apple Pencil.

Being the Good and Geeky iPad Artist

There is such a range of applications available for creating art on the iPad that you’ll never be stuck creatively. Use the iPad as a sketch pad, sitting in front of your subject and drawing onto the screen. There are huge advantages over using analogue means of drawing. The combination of the iPad plus a stylus or your finger gives you an unlimited variety of pencils, pens, brushes, charcoal sticks, colours and paint. It’s so easy to change the width of your drawing line, either by using the pressure sensitivity or by changing it in the settings. By using the Apple Pencil you’ll be able to unlock your creativity in the iPad art applications.

My Favourite iPad Artist Applications

Pixelmator for iPad and Procreate are my favourite applications for drawing and painting. I also like Tayasui for its watercolour brushes. iColorama is a particular favourite of mine because it does just about everything. ArtRage is fantastic for its virtual paint which gives you a realistic view of painted brushstrokes. If I want to do any type of vector drawing then I will open up the application Graphic which is by Autodesk. The application Paper by 53 is a favourite with many people but it’s not something I have warmed to. Which of these applications I would use really depends upon what it is I’m trying to do. There will also be times when with a particular piece of art I will move between two or three applications. It’s easy to move work from one app to another. If you’re looking for a specific effect not available in the main application you’re working with, send it out to another iPad artist application. Bring it back into where you are working when you’ve got what you wanted.

Building it up with layers

It’s really helpful to build up a drawing starting with a rough layout of the main shapes. You can progressively build on top of that by using layers in whichever iPad artist app you are using. You gradually build on top of previous layers to get the drawing just right; the final layers are for adding the details. Another great thing about layers is that if you need to go back to a previous part of the background, you can do so without destroying the details on top. Fill in an area of colour or change a colour where necessary, achieving effects quickly. As an iPad artist you’ll find the technology enables your creativity and doesn’t slow you down. You don’t have to spend time cleaning brushes, sharpening pencils or rubbing things out. It’s nice not to have any roadblocks in the way of making art.

Creative photography for the iPad artist

Not only do you have marvellous drawing and painting capabilities with your iPad, you can also work with photos creatively. The tools you have available are much more interesting and powerful than you find in some applications where all you get is a set of filters. With iColorama you can push pixels around to your heart’s content. You can do the same sort of things with the distortion tools in Pixelmator for iPad. It’s even possible to remove objects from within a photo so you’d never be able to tell they were there in the first place. Another useful area of photographic art manipulation is the blending modes for the layers. The best art applications for the iPad let you set the blend mode. This changes the way that two layers interact with each other in terms of colours, tone and light. If the top layer is set to multiply you get a completely different effect from when it is set to normal. In the book I show you how to create a caricature. There is a video showing how I took a photo and turned it into a caricature image using iColorama and Pixelmator for iPad. There are other videos within the book also.

Get the Good and Geeky iPad artist

There are lots of pictures in the iPad Artist book showing you how to use the various applications, along with examples of artwork I’ve created using my favourite iPad art applications. The idea is to whet your creative appetite and get you excited about being a good and geeky iPad artist. You may still get the urge to scratch some drawings onto paper with analogue tools now and then. But after seeing what you can get from creating with the Apple Pencil on the iPad, I think you’ll agree that going digital is the way forward.

 


Speech To Text Software on iOS and Mac

Good and Geeky way to write

Now that we have speech to text software, it baffles me that when people talk about writing these days they are often thinking about using pen and paper, or even worse a pencil. It’s not the most efficient way to get the words out of your head and onto the page. In fact, working that way is downright slow and old-fashioned. I’m still in that baffled state of mind when I see pictures representing writing and writers which show people sitting in front of a typewriter. If the picture is to advertise something to do with writing, you’ll see a romanticised, dreamy, stylised view of people with pens or with old-fashioned typewriters. It’s all there in the marketing hype and the connection to the old-fashioned ways of doing things. Nostalgia rules! More and more we’ll see images which include computers and people typing on keyboards in this advertising realm of ours. I’m firmly of the opinion that these days even that is a little bit old-fashioned. It’s certainly going to be a long time before we can just think and our thoughts will appear on the page. Mind you, that would be downright dangerous if there wasn’t any security involved in the process. Do we want people to know exactly what our thoughts are? It wouldn’t be very good if, as you walked along the road thinking about things, people were doing a drive-by download of your thoughts.

Despite everything, it is true to say there is a new, more modern way to write. I do it just about every day of the week. It is the fastest way to get the thoughts and ideas out of my head and into a document. I write by talking to my computer. I use speech to text software which is remarkably accurate while allowing me to write three to four times faster than pecking away at the keys on a keyboard. This is the way forward and it is available now on our mobile devices as well as on our desktop computers.

An overview of speech to text software

For the last couple of years we have had built-in software on our Macs which will take our speech and turn it into text. It’s very simple to get started. All you have to do is set it up initially in the System Preferences. Then a double tap on the Function key tells the computer to start listening to you. It’s using the same Siri dictation engine as you find on the mobile phones and on your iPad. It’s fairly accurate and once you get used to it you can write quite fast indeed. It’s possible you’ll achieve an accuracy rate of around 95 percent with your speech to text software.

What this means in practice is that you will need to edit your converted text and find where mistakes have been created by the software. You will find there will be a word or two here and there that needs to be changed. The software can sometimes get completely confused and insert a word which has nothing to do with what it was you wanted to say. There will be other words, homophones, which sound the same as different words; so you will have said there when you really meant their. There may also be one or two places where perhaps you were not very clear in how you said the word and it wasn’t interpreted quite right. This could be as simple as a word that was supposed to be plural, but came in as the singular version of the word.

Dictation built into all Macs

speech to text software on your Mac

What is the best speech to text software to use? When you are using the basic Siri dictation, whether it’s on the Mac using the built-in dictation software or on an iOS device, you will be dictating one or two sentences at a time. Depending on whether you are in the flow of creating the words from your ideas, you may want to either ignore any mistakes you see as they are made, or correct them as you go. I find if the sentence more or less makes sense it’s best to work out later what it was you meant. Catch more of the problems by doing the editing later. Edit the work in one fell swoop when you’re finished saying all of what it was you wanted to say. Whether you are writing using a keyboard or writing using dictation software, you will certainly have to do a proper edit before you publish. So you don’t need to think the editing stage of dictation is time-consuming and something to slow you down. Balance it against the fact that the software will usually spell things more correctly than you will with a keyboard-based workflow. You will have fewer typos to fix and that will make the complete editing process much faster overall.

Fixing Small Problems With Speech To Text Software

How to use Siri dictation on the iPhone

When I am using Siri dictation on the iPhone, the way it works out is that I find one or two words that are incorrect. It’s simple to select the word and, with just a couple of taps on the keyboard, put in the correct one. It is more productive than dictating the word again and expecting it to come out right the next time. One of the things you can do both on the Mac and also on your iOS device is to have the computer read back your text. When you do this on the Mac you just have to set it up in the System Preferences under Dictation & Speech. You will see there is a keyboard shortcut of Option-Escape. Use this keyboard shortcut and the voice you have chosen will start speaking whatever you have selected. I use the British English voice called Kate and sometimes the voice that goes by the name of Daniel. You can also set the speed of playback of the voice. When I’m in editing mode I will usually read out loud to myself, but getting the computer to read to you is another good option. It is also possible to do a similar thing on your iOS device. Once again you do have to set it up beforehand in the settings. It’s best to choose one of the enhanced voices as they sound more natural and less computer-like. By listening to the words being spoken, either by yourself or by your computer voice, you’ll find any mistakes still in the text so you can correct them.

Dictating in iOS with Siri Speech to text Software

Get Dragon Dictate on iOS

To get started with Siri dictation on your iPhone or iPad you need to be using the standard Apple keyboard. To the left of the spacebar you’ll see a key which has a picture of a microphone on it. Tap once on this key and you’ll get between one and two minutes of dictation time. You’ll be able to speak two or three sentences, maybe even four, before it stops and you have to go again. It’s a good moment to have a quick look at the speech to text results. It’s only a small amount of text and it’s easy to quickly change one or two words here or there if you need to, so that it makes sense. If it is more or less correct you can also just ignore it and go on with the next dictation session. Using this method you’ll quickly be able to crank out hundreds of words. Within 10 or 15 minutes you can easily have 500 or 600 words in your document. The microphone on the device, whether you are using the iPhone or the iPad, is of a high quality and providing you are close enough to it you will get good word recognition. If you’re going to be moving around from place to place so you can walk and talk at the same time, consider using an external microphone. The microphone included on the headset which comes as standard with the iPhone will give you good quality speech to text conversion. By having the earpiece plugged into your ear, the microphone is going to be in just the right place for your speaking. It was, after all, designed so you would have good communications when speaking on the phone using the earbuds and the microphone. Another possibility you might want to consider is a clip-on microphone (lavalier mic). The beauty of this is the microphone is held in just the right place in terms of distance from your mouth to ensure a good quality recording. I use a microphone made by Giant Squid and it only cost about $40. I did have to buy a converter to allow me to plug into the 3.5 mm socket on the phone.
There are other external microphones you can buy and some of them will plug into the Lightning connector.

Dictating in iOS using an application

There is speech to text software made by Nuance Communications called Dragon Dictation. It works in a similar way to Siri dictation. The difference is that you press a button on the screen to start the dictation and you don’t see any feedback until you have finished dictating. Ever since iOS 8 or 9, Siri dictation throws the words onto the screen as you’re dictating. With the Dragon Dictation app it seems you can dictate for quite a long time, more than the two or three sentences you get from Siri. When the Dragon Dictation application has finished putting the text on the screen you can easily select single words, although you’re just given the option to delete. When you’ve deleted the word you can bring up the keyboard on the screen and insert the correct word to replace the one you’ve just deleted. I suspect this application is going to be more accurate than Siri dictation, even though it’s not so different. Which one you prefer to use is going to depend upon how you like to work. The advantage of using the inbuilt Siri dictation is the fact it is always there, accessible by a key on your standard keyboard. It is nice to see the words arriving on screen as you are dictating, even if you are limited in the amount of time you can dictate for. The other advantage is that you can dictate into whichever application you want to work in. You might be working in Ulysses on your iPad or perhaps Byword on either the iPad or iPhone. If you go down the route of using Dragon Dictation on iOS then you’ll need to copy the text you dictated and paste it wherever you really wanted it to be. The Siri dictation is the one I use most often. It seems I have some sort of preference there.

Record into Audio recording apps

Recording audio for transcription

There is the other option of recording the audio into general audio recorder software such as Twisted Wave, Just Record or one of the many other voice recorders available. The other option would be to use the Dragon Recorder application from Nuance Communications. Mostly the difference between these applications is going to be how you transfer the file created on the iOS device to where you’re going to have the voice transcribed. You will need to have the full DragonDictate speech to text software on your computer; I use DragonDictate for Mac 5. The method for moving the file with the Dragon audio recording software for iOS is to link up to a web browser on your desktop computer. It’s easy to do even if it is not particularly elegant. It’s for this reason I prefer to use one of the other audio recording apps. With Twisted Wave I get the option to set the input gain manually so it’s best suited to the microphone being used. This way I can be sure I’m not recording at too high a level and having clipping destroy the recorded audio. At the same time you don’t want to have the volume of the recorded audio too low. It’s because of these things I generally tend to use Twisted Wave. There are occasions I might use Just Record. My reason to change to this would be that it is quicker to get into recording mode. It is a much simpler application and all you get is just one button to press when you want to start a recording. The quality of the recording seems to be good enough despite the simplicity of the application. Either of these two applications allows me to share the completed audio file to an application so I can transfer it to my Mac. My preferred method is to use an application called BitTorrent Sync. The good thing about this application is that I don’t need to connect my iPhone or iPad to the Mac with cables in order to move the file.
It’s not necessary for the file to go out across the Internet and then come back in again either. I have the sister application of BitTorrent Sync on the Mac and the file is transferred directly from the iPhone to my Mac. It is very fast to transfer files this way; I also use it with pictures. As soon as I have the file in the folder on my Mac I can change the mode in the speech to text software from dictation mode to transcription mode. I just drag and drop the file onto the DragonDictate for Mac application. The transcription starts working straight away and works faster than real-time. So if I have an audio file which is 15 minutes long, it might only take 10 minutes for DragonDictate to convert the speech into text. It must be able to do some magic in ignoring areas of the recording where there is no voice to transcribe.

Listen to the Audio and read the transcript

When the transcription has been completed by the DragonDictate speech to text software on the Mac, the results pop up in a TextEdit window. You could simply read through it and edit it in the same way as you would edit any other dictated or typed words. Another possibility is to first play the audio from your file as you read through the transcribed text. You’ll be able to listen to what it was you actually said and compare it directly with the text on the screen in TextEdit. You’ll probably still want to do an edit where you read through your text out loud. This is when you will be organising your words and making the piece flow better for when it’s read by somebody else. It’s not going to be perfect directly as it comes from your mouth unless it’s a short document. Editing has to be done.

Dictation on the Mac

Dragon Dictate Status Window

As already mentioned, it’s possible to use Siri dictation on the Mac. This is a good place to start if you want to experiment with the possibilities of speech to text conversion. It’s not the same as using dedicated software like DragonDictate. With DragonDictate you get more tools to use and a better vocabulary to work from. DragonDictate also learns from your speech patterns and is much more accurate overall. It’s necessary to train your Dragon when you first start using the software. One way to do this is to read the texts provided in DragonDictate. However, there is a school of thought which says you should instead train your Dragon by reading your own writings into the software. This way you’re getting the dictation software to learn the words you use personally, as well as how to recognise your way of pronouncing things. The pieces of text you can use from within the software are only fairly short. You do get the option to make corrections as you are training your Dragon. This could be enough to get you going and the Dragon can do the rest of the training on the job as it were.

Correct Text as You Go

Another thing DragonDictate can do is let you correct text as you go. Tell the dictation software to select a word or set of words and then say the words again, or change the words to what you really wanted to say. With Siri dictation you can use the punctuation commands while you are dictating, but with DragonDictate you get a wider range of commands. The DragonDictate software is fairly expensive and if you don’t do much writing it’s probably not worth getting. If you write every day or if you have problems with repetitive strain injuries then DragonDictate is well worth the money. For a start, you’ll find you can write about three or four times faster than if you’re using the keyboard and typing with your fingers. Another good point in favour of the DragonDictate software is using it to control other applications on your Mac. I quite often use DragonDictate within the application Messages. I will use DragonDictate to first of all open the Messages application. I will dictate the message I want to send and finish off by using the command ‘Send Message’. I also use DragonDictate extensively with Day One. I use it to input entries into my journal every day of the week. I have also set it up with commands specific to the Day One application. I use my voice to command Day One to start a new journal entry and I can also tell it to open up the tagging window. It does make things much faster for creating journal entries. Dictation is a fantastic way to record my life in the best digital journal app. You can also use these specialised DragonDictate commands in applications such as Mail or even in Word. If you are a writer you don’t want to be using Microsoft Word with your speech to text software. You should be using Ulysses or Scrivener. Word is for office workers and not for us creative types. Whatever word processing application you use with speech to text software, you will be amazed at how productive you can get.

Correcting text in DragonDictate

Recommendations for Using Dictation

Training your Dragon

Until you get used to using speech to text software for dictation it will seem quite strange. It isn’t really something you can do in an office with other people around, either. Dictation is great when you are working by yourself in a quiet place. Using dictation is also marvellous if you are able to dictate as you are walking. I will sometimes do dictation while I’m out walking the dog. If you want a more relaxed seating position and workspace, why not relax on the sofa and dictate into your iPhone or iPad? If you use a laptop computer you can also use the full-blown, best-in-class DragonDictate while sitting with your feet up in an armchair. You don’t have to worry about getting into a specific seated position where you can attack a keyboard with your fingers. Dictation is a must if you suffer from repetitive strain injury. Being able to say goodbye to the pain is a great motivator for taking up writing by dictation.

Take the time to get used to a new way of working

Remember, it does take time to get used to new ways of working. You’ll need to get used to speaking clearly and dictating the punctuation as you say your sentences. If you can get past the early stages, I feel sure you’re going to be a happy writer creating many thousands of words. It doesn’t have to take more than a couple of days, or at most a week or so, before you are an expert at using DragonDictate. There are versions for Windows and also for the Mac, and lately Nuance Communications have come up with Dragon Anywhere. Dragon Anywhere is a subscription service which is supposed to give you the same quality of dictation as you would get with the desktop software. I haven’t been able to try it yet, as at present it’s only available in North America. So I urge you to go forth and get all futuristic with your writing and take up dictation. Try it out on your iOS devices and also compare the built-in dictation on the Mac. If you’re serious, though, you will go for the industry-standard product, DragonDictate.

Statistics for the Dictation

These are the statistics for writing this document using DragonDictate

Aim for 5k words per hour

This document has been dictated using speech to text software into the Ulysses text editor on my Mac. I did it in word sprints, the first one for just 10 minutes, followed by four separate 15-minute sprints. The last section, starting from Recommendations for Using Dictation, I added without setting a timer – probably just another 10 minutes of writing. My average was nearly 2600 words per hour. The screenshot is from the app 5KWPH by Chris Fox.
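As a rough sanity check of that figure, words per hour is simply total words divided by total hours. The little sketch below uses the sprint times mentioned above; the total word count is a hypothetical number (the post doesn’t state one) chosen to land near the 2600 average:

```python
# Rough words-per-hour arithmetic for the sprints described above.
sprint_minutes = [10, 15, 15, 15, 15, 10]  # six sprints as described
total_minutes = sum(sprint_minutes)        # 80 minutes in total

words_written = 3450  # hypothetical total word count, for illustration
words_per_hour = words_written / (total_minutes / 60)
print(round(words_per_hour))  # about 2590 – nearly 2600 words an hour
```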

Book visual - Dictation

Right-click on the book to download the ePub (6 MB download) – or click here for the Kindle version.
