Programming

Will Programming by Voice Be the Next Frontier in Software Development? (ieee.org) 119

Two software engineers with injuries or chronic pain conditions have both started voice-coding platforms, reports IEEE Spectrum. "Programmers utter commands to manipulate code and create custom commands that cater to and automate their workflows." The voice-coding app Serenade, for instance, has a speech-to-text engine developed specifically for code, unlike Google's speech-to-text API, which is designed for conversational speech. Once a software engineer speaks the code, Serenade's engine feeds that into its natural-language processing layer, whose machine-learning models are trained to identify and translate common programming constructs to syntactically valid code...

Talon has several components to it: speech recognition, eye tracking, and noise recognition. Talon's speech-recognition engine is based on Facebook's Wav2letter automatic speech-recognition system, which [founder Ryan] Hileman extended to accommodate commands for voice coding. Meanwhile, Talon's eye tracking and noise-recognition capabilities simulate navigating with a mouse, moving a cursor around the screen based on eye movements and making clicks based on mouth pops. "That sound is easy to make. It's low effort and takes low latency to recognize, so it's a much faster, nonverbal way of clicking the mouse that doesn't cause vocal strain," Hileman says...

Open-source voice-coding platforms such as Aenea and Caster are free, but both rely on the Dragon speech-recognition engine, which users will have to purchase themselves. That said, Caster offers support for Kaldi, an open-source speech-recognition tool kit, and Windows Speech Recognition, which comes preinstalled in Windows.

  • Siri: code for me. (Score:4, Insightful)

    by Ostracus ( 1354233 ) on Sunday March 28, 2021 @09:39AM (#61208430) Journal

    No more a "frontier" than the current voice assistants.

    • Why? (Score:5, Insightful)

      by Immerman ( 2627577 ) on Sunday March 28, 2021 @10:52AM (#61208730)

      Not even a voice assistant - just speech to text. And I can't imagine why you'd want such a thing. How many programmers can talk faster than they type? Especially for things like parentheses, where precise placement is essential. "Close parenthesis" is a mouthful. Though I suppose you could use concise keywords or something like tongue-clicks to quickly indicate "close the innermost brace-pair, whatever that might be".

      Still, the single biggest productivity-boosting feature of modern IDEs for me is context-sensitive autocomplete, and I don't see any way that could work by voice. I no longer have to remember/look up exactly what the variable or function name I'm looking for is, just how the name would start. I even started choosing names for their autocomplete-effectiveness - all closely-related functions should start the same way: e.g. File_open rather than Open_file, so that it's right next to File_close, File_seek, etc. Sure it reads a little Yoda-esque sometimes, but it doesn't actually hurt clarity, and it makes it so much easier to find the function I'm looking for. You want me to remember exactly what very clear and explicit 4-6 word function name I used so that speech recognition can find it? No thanks.

      I suppose it's nice for those who have hand or other issues that keep them from typing comfortably - but I sure as hell don't want to have to work anywhere near the cubicle of the person programming by voice.

      • Still, the single biggest productivity-boosting feature of modern IDEs for me is context-sensitive autocomplete, and I don't see any way that could work by voice.

        Presumably voice-based programming would not be at this level.

        Having said that...

        'When someone says “I want a programming language in which I need only say what I wish done,” give him a lollipop.' -- Alan Perlis

        As for this...

        I even started choosing names for their autocomplete-effectiveness - all closely-related functions should start the same way: e.g. File_open rather than Open_file, so that it's right next to File_close, File_seek, etc. Sure it reads a little Yoda-esque sometimes, but doesn't actually hurt clarity, and makes it so much easier to find the function I'm looking for.

        Sounds like you need actually working autocomplete [youtube.com], then?

        • >Presumably voice-based programming would not be at this level.
          From TFS:
          >has a speech-to-text engine developed specifically for code...
          Sounds like *exactly* that level.
          Though I'll admit the part about voice-activated shortcuts, etc. does sound like it could be handy. I've used Voice Commander, VoiceBot, etc. over the years as handy tools for games and programs with lots of commonly-used shortcuts that can be hard to remember - but that takes a lot of setup so I never actually set it up for that many

      • Especially for things like parenthesis where precise placement is essential. "close parenthesis" is a mouthful.

        You are missing the point. The editor is syntax-aware, so you would never say "close parenthesis". You would just say "func foobar return int" and the editor would give you:

        int foobar(^ ) { }

        with the cursor at the ^. Then you give the args. When you say "done", the cursor jumps past the first brace.

        Syntax awareness, predictive suggestions, and auto-complete allow short expressions to quickly generate code without any need to sound out individual characters.
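The expansion step described above can be sketched as a toy command-to-snippet mapper. This is purely illustrative - not Serenade's or Talon's actual grammar - with a hypothetical expandFunc helper and '^' marking where the cursor would land:

```cpp
#include <sstream>
#include <string>

// Toy expansion of the spoken command "func NAME return TYPE" into a
// code skeleton, with '^' marking where the cursor would land.
// Illustrative only; real voice-coding engines use full NLP models.
std::string expandFunc(const std::string& spoken) {
    std::istringstream in(spoken);
    std::string kw, name, ret, type;
    in >> kw >> name >> ret >> type;
    if (kw != "func" || ret != "return" || type.empty())
        return spoken;                      // not a recognized command
    return type + " " + name + "(^) { }";
}
```

Saying "func foobar return int" would then yield "int foobar(^) { }", ready for the arguments to be dictated.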

        • For boilerplate expansion and navigation macros like that - sure, sounds great. I've often set up similar hotkey-based macros for all the common language structures, and voice commands make that even easier. But that's not programming by voice, that's just triggering the occasional boilerplate shortcut.

          If you're doing actual programming, then if I say "x times y + z", it matters a great deal whether that's interpreted as (x*y)+z or x*(y+z).

          • Just to nitpick:

            (x*y)+z is the default in most languages, and by coincidence would even work in Smalltalk.

            That is easy: x*(y+z)

            " X times brackets Y plus Z done "

            Seriously.

            How the funk would you say this one over the phone?

            for (int i = 0; i < 10; i++) {
                printf("%d\n", i);
            }

            " for loop with 'i' from 0 to less than 10 with increment one *body* print 'i' done "

            It is not so hard. Or would you dictate every single character to a human who can program, or would you assume they know what a for loop is?

            • Sure, I know all about order of operations. My point was simply to be illustrative.

              Over the phone? "For i from 0 to 9, print i"

              Oh, and just FYI, not sure about the details of your language, but in C++ using ++i is almost always preferable to i++. There's a bunch of extra overhead for i++ since it has to store the original value in a temporary variable for subsequent use. You'd hope the compiler would optimize away the unused temporary, but that's not always the case. And when you get into object-based
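The overhead is easiest to see in the canonical operator definitions for a class type (a textbook sketch, not code from any particular project): prefix increments in place, while postfix must copy the old value first.

```cpp
struct Counter {
    int v = 0;
    Counter& operator++() {      // prefix ++c: increment in place, return self
        ++v;
        return *this;
    }
    Counter operator++(int) {    // postfix c++: copy the old value first
        Counter old = *this;     // this temporary is the extra overhead
        ++v;
        return old;              // caller gets the pre-increment value
    }
};
```

For plain ints any non-feeble compiler optimizes the unused copy away; for class types with nontrivial copy constructors it may not.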

              • Interesting point, I had assumed for int's etc. the compiler takes care of that.

                • For ints, it should. Compilers feeble enough to fail at such simple optimizations are no longer common.

                  However, there's a style/convention aspect to it as well - your code will look cleaner and raise fewer questions if you always use the same increment syntax, so it makes sense to use the more efficient syntax for everything. It also keeps you from accidentally needlessly invoking the i++ overhead in places where it can't be optimized away. (It also maps better to a same-order English reading: ++i = "inc

                    • Print (++i, ++i, ++i)
                      This is no longer undefined, at least not in C++ - not sure about C.
                      They invented the term "sequencing" or "sequence point", something like this.
                      So every comma and semicolon is a sequencing point and evaluation is from left to right.

                    • Nope. Even in C++17 the evaluation order of function parameters was still unspecified, not sure about the newer ones. What did change with that version was that the function name must be evaluated before the parameters. e.g. if GetFunctionPointer() returns a pointer to Foo(int x, int y), then
                      GetFunctionPointer() (Get1(), Get2())
                      *must* call GetFunctionPointer() first, but the order in which Get1() and Get2() are called is still unspecified. Prior to C++17 it was also unspecified when GetFunctionPointe
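The guarantee (and the remaining freedom) can be checked directly. A minimal sketch, assuming a C++17 compiler, with hypothetical names matching the comment:

```cpp
#include <string>
#include <vector>

std::vector<std::string> callLog;   // records evaluation order

int foo(int, int) { return 0; }

using Fn = int (*)(int, int);

Fn GetFunctionPointer() { callLog.push_back("GetF"); return foo; }
int Get1() { callLog.push_back("Get1"); return 1; }
int Get2() { callLog.push_back("Get2"); return 2; }

// In C++17, GetFunctionPointer() is guaranteed to run before either
// argument expression, but Get1() vs Get2() may still run in either order:
//     GetFunctionPointer()(Get1(), Get2());
```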

                    • >The one exception is logical operators, which are safe thanks to short-circuiting guarantees. e.g.

                      And actually that's only true for built-in types. If a user type overloads the logical operators then short-circuiting is ignored, and the usual function-call evaluation order prevails, with both operands being evaluated before the overloaded operator function is called.
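A minimal sketch of that pitfall, with a hypothetical Flag type: once operator&& is overloaded, the right operand is evaluated even when the left is already false.

```cpp
int rhsCalls = 0;

struct Flag { bool value; };

// Overloading && turns it into an ordinary function call, so BOTH
// operands are evaluated before the body runs -- no short-circuiting.
bool operator&&(Flag a, Flag b) { return a.value && b.value; }

Flag rhs() { ++rhsCalls; return Flag{true}; }
```

Flag{false} && rhs() still calls rhs(), whereas false && f() on built-in bools never would.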

          • If you're doing actual programming, then if I say "x times y + z", it matters a great deal whether that's interpreted as (x*y)+z or x*(y+z).

            If you say "x times y plus z", you will get "(x*y) + z" because that is the standard precedence.

            If you want "x * (y+z)" you can say "x times open y plus z close". Or you can use RPN: "x y z plus times" and the editor can auto-convert it to infix.

            Reverse Polish Notation [wikipedia.org] can seem very natural once you get used to it.

            For people with RSI and other disabilities, it is not necessary to eliminate all typing. Cutting out 80% of it is still a big benefit.
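The "auto-convert" step is simple because RPN needs no precedence rules or brackets at all. A minimal evaluator sketch (a hypothetical evalRPN helper handling only integers, + and *):

```cpp
#include <sstream>
#include <stack>
#include <string>

// Evaluate a whitespace-separated RPN expression over ints,
// supporting + and * -- e.g. "x y z plus times" spoken aloud
// becomes "2 3 4 + *" == 2 * (3 + 4) == 14.
int evalRPN(const std::string& expr) {
    std::stack<int> st;
    std::istringstream in(expr);
    std::string tok;
    while (in >> tok) {
        if (tok == "+" || tok == "*") {
            int b = st.top(); st.pop();
            int a = st.top(); st.pop();
            st.push(tok == "+" ? a + b : a * b);
        } else {
            st.push(std::stoi(tok));
        }
    }
    return st.top();
}
```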

            • And how would you distinguish that from x*open(y+z)? Open is hardly a safe name. Braces { }, brackets [ ], and chevrons are already taken. Parens maybe - that's not too much of a tongue twister. Of course done() and close() aren't exactly safe names either, so you'd need to come up with something there, though you could at least share the same thing between all four, assuming your speech recognition was sufficiently context aware.

              The point I was trying to make was that that can get quite verbose comp

              • And how would you distinguish that from x*open(y+z)?

                The editor is smart enough to know that "open(y + z)" is not valid because the first argument to open() is not a number.

                But left-parens are way more common than calling open(), so you optimize for the common case and disambiguate the less common case by saying "call open" or something.

                Anyway, this is a silly discussion. You are pointing out corner cases where voice-to-code is not a perfect solution. I don't disagree that those cases exist.

                For people who can't type, voice-to-code offers them a way to do pro

      • Still, the single biggest productivity-boosting feature of modern IDEs for me is context-sensitive autocomplete, and I don't see any way that could work by voice. I no longer have to remember/lookup exactly what the variable or function name I'm looking for is, just how the name would start.

        Sadly you lose those benefits, and others, when you use a modern language like Javascript, Python, or Ruby.

        • Why? Are there no half-decent 90's-class IDEs out there for any of them?

          • In those languages, you can't know what type a variable is until runtime. You can't even know what variables/functions are in scope.

            • Sure, many languages use "everything is a variant/auto type", which confounds naive autocomplete when accessing object members. Though it should generally be fairly trivial to figure out the types of at least most local variables - if you say x = new MyObject(), then you know x is a MyObject from that point until it's set to something else.

              Your scope comments make no sense though - all variables declared above the current line within the current code-block and its parent code block(s) are within scope.
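That trivial inference could be sketched as a single pass over assignment statements - a toy, string-level version of what real IDEs do on an AST, with a hypothetical inferTypes helper:

```cpp
#include <map>
#include <string>
#include <vector>

// Toy inference: scan "x = new TypeName()" statements and record the
// last known type of each variable. Real IDEs do this over an AST,
// with proper scoping; this just illustrates the idea.
std::map<std::string, std::string> inferTypes(
        const std::vector<std::string>& stmts) {
    std::map<std::string, std::string> types;
    for (const auto& s : stmts) {
        auto eq = s.find(" = new ");
        auto paren = s.find('(');
        if (eq == std::string::npos || paren == std::string::npos) continue;
        std::string var = s.substr(0, eq);
        std::string type = s.substr(eq + 7, paren - (eq + 7));
        types[var] = type;   // later assignments overwrite earlier ones
    }
    return types;
}
```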

            • Nevertheless auto complete works quite fine for those languages. Most of the time the IDE knows the type exactly.

              • What editor do you use? Personally I'm more interested in the "find all instances/change name" functionality.

                • Mostly Eclipse, otherwise the ones from JetBrains: IntelliJ for Java and related languages (Kotlin, Scala, Groovy etc.), Android Studio (based on IntelliJ) for Dart/Java, PHPStorm or PythonStorm (I think that is the name), also both based on IntelliJ.

                  All those modern IDEs have a special API (it has a special name, I don't recall it atm) which interconnects the text editors with the compiler(s). The compilers basically hold the whole program/codebase as a collection of ASTs in a graph database. Most people program inside

                  • Most people program inside of a running program anyway, while they code in debug mode.

                    What lol

                    • Yes,
                      with C# and Java, that is no problem at all.
                      And during the mid 90s it worked well with early "hot code replacement" C++ as well.
                      Now that much of C++ compilation goes via clang, I don't know if a modern C++ IDE can do that.

                      Some IDEs allow backward executing of code.

                      You have a breakpoint where you think the problem might be. Your IDE hits the breakpoint. You find something implausible in what you see. You step backward, "unexecute" the code, inspecting the variables involved.

                      Some IDEs can only do that in the current stack frame

      • My first thought too. But reading TFA, it's about those who cannot type fast, or cannot type at all. As opposed to the "programming is so simple and needs to be accessible to those who want to program while driving" sort of thing (program by drawing pictures, program by dragging blobs around on screen, program without the tedium of syntax, etc).

      • Well,
        I would assume if you say "File", the text is "typed" into your IDE and the autocomplete works just as if you were typing. Or am I missing something?

      • by vix86 ( 592763 )

        You wouldn't speak it the same way most of us read it, you'd build a different vocabulary around the language you want to work in.

        Here [youtu.be] is a talk Tavis Rudd gave about using Python to do this. You'll see that he uses a very minimized lexicon to code. You would probably end up using a different style of IDE built for the task though. Our current IDEs are built for typing in stuff, which doesn't mesh with voice input. Exactly what changes you would make in this, I don't know, but there is probably a lot you could do.

        As f

  • by Chelloveck ( 14643 ) on Sunday March 28, 2021 @09:51AM (#61208478)
    It's still too early to think about doing this. April 1st is four days away! Save the joke articles until then. And, try to make them more believable.
  • Betteridge says no (Score:5, Insightful)

    by Mononymous ( 6156676 ) on Sunday March 28, 2021 @09:53AM (#61208490)

    These advancements are great news for people with disabilities. But just like we aren't all going around in wheelchairs, voice recognition will never take the place of programming with a keyboard for most people.

  • by Misagon ( 1135 ) on Sunday March 28, 2021 @09:57AM (#61208496)

    See title ... ;)

    Anyway. I think programmers should consider using ergonomic keyboards and mice before they get repetitive strain injury or carpal tunnel syndrome.
    Myself, I started using semi-vertical mice a decade ago, from which I also removed the scroll wheel (because of pain in my scrolling finger). I also transitioned to using mechanical keyboards with light switches (which is also a property of good ergonomic keyboards, just like tenting/gable angling and hand separation/opening angle).

    I have tried eye-tracking for mouse movements, and I did not like it. It felt unnatural. Eye-tracking puts a constraint on where I can look. If I forget for a second to be conscious of where I put my eyes and let them look freely, it is easy to make an error -- but looking freely is only natural.
    My eyes are part of my senses, not part of my limbs. They are supposed to look everywhere and provide situational awareness.

    • by dvice ( 6309704 )

      I use a normal mouse (cheapest you can find) and I have no problems. The reason is that I keep my hand extremely relaxed on the mouse and on the table. I could sleep in that position and my hand would not move, that is how relaxed it is. I use the same style even when playing 3rd person shooter games. I have spent about half of my life on a computer, so perhaps I have grown to use the mouse as a part of my body and my brain has learned to minimize the effort to use it.

    • Even a more common alternative interface that many people like, mouse-gestures, can cause problems. When I had a laptop with that enabled by default a few years ago, I gestured by accident a few times. I ended up disabling that whole thing after just a couple days as part of breaking in the new machine.

    • "I think programmers should consider using ergonomic keyboards and mice before they get repetitive strain injury or carpal tunnel syndrome."
      I think programmers should do some sports.
      Martial arts come to mind ...

      So you use your hands, spine, head and legs for other "not so repetitive" abilities. And stay healthy that way.

      • by ghoul ( 157158 )
        Time on earth is limited. Might as well ask Usain Bolt to spend time learning to cook.
        • Well,
          I have in total 7 black belts ... as I have practiced several martial arts for well over 35 years.
          I just program a little bit longer :P

          There is plenty of time.
          And in fact: learning to cook, and actually doing it, is also a meditative and relaxing activity.
          Have the TV off of course and rather listen to music or chat with your significant other.

  • Keyboards and pointing devices are annoying peripherals that contribute to repetitive stress injuries.
    • Graphical programming would take off sooner than thought programming. Like Unreal's Blueprints.

      • Graphical programming would take off sooner than thought programming

        Yes, this. Just wait til you see the 3D immersive MMORPG I'm writing in Scratch.

      • I would argue that the biggest problem with modern programming languages (we've solved the syntax problem, we no longer have semantically significant columns, for example) is that they don't allow you to see the organization of the code you are working on. A 3D graphical programming GUI would definitely help with this. IDA Pro also has a helpful interface like this, although it's 2D.

        Ideally, you should easily be able to see variable scope (everywhere a variable/object is used) and the calling tree (everywhe

        • Well, Smalltalk was very good at that sort of thing. It is annoying for some, like me, who like to see all the code instead of continually pointing and clicking, but it had great tools for code browsing and seeing the big picture and all that. I loved it, and I hate IDEs. Similarly, we have yet to have a command line as useful as on Lisp machines. Systems where productivity and ease of use for a professional took priority over ease of use for novices.

        • I'm not sure about the 3D gui - we've only got 2D eyes. 3D can be handy for overviews, but we actually have a really hard time making sense of the details of 3D-organized information. Even 2D organization has issues, e.g. an if-else block or switch() statement would logically have the alternate code paths displayed side by side rather than sequentially, but that runs into problems with horizontal screen space.

          Variable scope can be handy - I've used a few IDEs that automatically highlight every instance of

          • 3D can be handy for overviews, but we actually have a really hard time making sense of the details of 3D-organized information

            Yeah, a 3D overview that zooms into a 2D (or normal text) closeup for regular editing. Smoothly transitioning between both, so you can see overall organization and closeup design nearly simultaneously.

            I'm not sure how doing the same for functions would work, you'd have to be looking at some sort of ultra-condensed overview of the entire project. Even a "find all uses" search can be a little overwhelming in IDEs that support it, as hundreds of calls can be scattered across half the project.

            Yeah, that's likely an indication that you need some refactoring, or at least re-organize the common communication pathways in your code. That is kind of the goal: to make it easy to see the organization, and help you make the organization understandable. (Of course, some functions like print() would be used

            • If you're using functions as flow control between major code blocks that makes sense. The same holds for far too many object-methods. I was thinking more general utility functions, with something like sqrt() at one extreme. The ideal function is one that's useful in many different contexts without caring what those contexts are.

              At the opposite extreme, if a function is only ever called from a single place, there's very limited benefit in writing it as a separate function at all, all you're gainin

              • If you're using functions as flow control between major code blocks that makes sense.

                Whatever mechanism you use for flow control, you need to be able to visualize it (at least in your head), otherwise the code can't be understood.

              • by mark-t ( 151149 )

                if a function is only ever called from a single place, there's very limited benefit in writing it as a separate function at all, all you're gaining is some partitioning. Which shouldn't be underestimated...

                Make up your mind. :)

                But seriously, writing a function that may only ever be called from one place helps keep your functions small, and that in turn makes the functions easier to understand as discrete concepts. Optimization steps performed by the compiler can handle inlining as necessary, so nothing is

                • >Make up your mind. :)

                  This is Slashdot, so I'll translate to a car analogy :-)

                  If a car is only ever used as a dry place to sleep, there's very limited benefit in getting something that's a car at all, all you're gaining is some weather proofing. Which shouldn't be underestimated, but is only a small corner of what cars have to offer.

                  As an example alternative in C which I sometimes use, code blocks. Particularly handy when you've got a lot of shared data the various functions need to work on. (using ' t
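A sketch of that bare-block pattern: each block partitions the function and scopes its temporaries, updating shared data in place, without any function-call plumbing. (The process() function here is a made-up example, not from the parent's code.)

```cpp
int process() {
    int result = 0;   // shared data the blocks work on

    // Step 1: a bare block scopes its temporaries; 'sum' does not
    // leak into the rest of the function.
    {
        int sum = 0;
        for (int i = 1; i <= 4; ++i) sum += i;
        result = sum;
    }

    // Step 2: another block may freely reuse the same local names.
    {
        int sum = result * 2;
        result = sum;
    }
    return result;   // (1+2+3+4) * 2
}
```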

                  • by mark-t ( 151149 )

                    The premise behind short functions is readability. You should not have to "drill down" into other functions that are called by a current short function you are reading if those functions are well named. The names should reflect what those functions do, nothing more, and nothing less.

                    A function should ideally do only one thing. If that thing is itself composed of many sequential steps, such as your 'Foo example', then it is practical to separate those steps into their own well-named functions and just

                    • >You should not have to "drill down" into other functions that are called by a current short function you are reading if those functions are well named.
                      Of course you will - you wouldn't be reading the function in the first place unless you wanted to understand exactly what it's doing, and probably change something. Neither of which can be achieved without drilling down to the code that actually does the work.

                      Meanwhile, if you rip out all the code blocks as separate functions (and yes, they make natural

                    • by mark-t ( 151149 )

                      Comments can lie.

                      Code cannot.

                      Readable code is far more reliable than readable comments.

                      Comments shouldn't. But they can. Humans make mistakes, and those mistakes can be reflected by comments. Therefore, a new developer to a codebase always needs to read the code anyways to be sure that they understand it and that legacy comments which do not reflect the current state of the code have not been left in.

                      This is easiest to do when clear naming conventions are used, and individual functions are mad

                    • >Code cannot.
                      Granted. But that's code. Function names most *definitely* can.

                      Definitely *shouldn't*, but definitely can.

                    • by mark-t ( 151149 )

                      Fair point... but between having to come up with readable names and maintaining comments, I'd rather come up with good names for things in the first place, eliminating the need for what would be redundant comments.

                      I'd suggest that it might be slightly easier for a comment to lie than a function name too, because every programmer looks at the names of the functions for the lifetime that the software is being maintained. After a while, nobody pays attention to the comments anymore because they already know what

    • by Joviex ( 976416 )

      Keyboards and pointing devices are annoying peripherals that contribute to repetitive stress injuries..

      For some people who can't be assed to work properly. Been banging on keys and pushing a mouse for 30 professional years as a SWE, with 42 total. Zero repetitive stress anything.

  • by DrYak ( 748999 ) on Sunday March 28, 2021 @09:59AM (#61208520) Homepage

    Unless I have to put up with one of the horrendous stuff that (cr)Apple has decided to impose on their users as "(pretend) input interface", I am way faster typing than speaking, especially when coding in languages that can use strings of punctuation symbols to represent logic in a densely packed manner.

    So unless "programming by voice" means a "constant flux of fast-paced short noises densely representing logic" - not merely a few mouth-pops to simulate clicks like TFS suggests, but the ability to, e.g., verbalize regexes as densely as typing them - I don't see how this isn't:
    - at worst a completely asinine idea
    - at best an extremely slow alternative that could serve as a roundabout way to help people with disabilities, and for the few corner cases where being hands-free is more critical than speed of coding.

  • by DrXym ( 126579 )
    A headline with a question to which the answer is obviously no. It would be a fantastically clumsy way to program.
  • In a general way? (Score:4, Interesting)

    by cascadingstylesheet ( 140919 ) on Sunday March 28, 2021 @10:09AM (#61208554) Journal

    In a general way? No.

    Tools to help those with disabilities are great. Kudos, and I'm all for them. But for those who don't need them, no; precise symbolic programming languages are not going to become easier to do by voice than with the more usual tools.

  • by groobly ( 6155920 ) on Sunday March 28, 2021 @10:10AM (#61208560)

    The Universal Answer to all headlines that ask a question:

    "No."

    (You may keep this for use with future headlines.)

  • by Registered Coward v2 ( 447531 ) on Sunday March 28, 2021 @10:19AM (#61208594)
    I worked with some doctors who used Dragon's medical software to enter patient notes. It worked well after they trained it to recognize their unique speech patterns. Even so, they had to proof the notes to be sure they were accurate. To me, proofing is the issue, based on the differences between writing and coding. A special dictionary can catch errors for the use case, such as medical terms, as one probably could for coding. While "read" and "reed" are context-specific and easily identified with a decent grammar checker, or even by simply using the most commonly used spelling based on a user's writing habits, what happens if you have assigned variable names of C, Sea and Si? The proofing, moreover, could be more of a challenge. Case notes are still written text you can read; programming, not so much. In written text a minor typo does not make it unreadable, whereas in code a minor typo could yield unpredictable and perhaps hard-to-discover errors. I'd wager coders would spend as much time proofing and debugging text as they saved using voice recognition.
    • Yes, I worked with someone who was integrating commercial voice recognition software with a medical device. It made a lot of sense, given that it means less touching of things, and it matches the way many doctors work with a recording device that later gets transcribed. Sure seems old fashioned, but at the same time it can be used while in surgery, or stepping out briefly to consult an xray or other image.

      On the other hand, I remember a talk at a cardiology conference specifically about how doctors need t

      • The last time that I saw my doctor before he retired, the practice group had put a keyboard/display on an articulated arm, into each exam room, running something from Epic. It made me sad and frustrated to see how both the apparent flow of his work, and the flow of our interaction, were disrupted by comparison to clipboard, chart, and pen. We are working with 6 minute scheduling slots, 5 minutes with the patient.
    • I'm not sure they'd even save much time using voice recognition - how much faster can you talk than you type? Especially if you need to speak all sorts of punctuation like parentheses? Though if recognition were well-integrated enough, there might be some interesting potential for speaking names while typing punctuation.

    • by ghoul ( 157158 )
      Generally, unless you are trying to be clever and extract a little bit more efficiency (and with today's computers and compilers that's really a waste of programmer time), 99% of programming is deciding the logic. Typing in the code is 1% of the time. So really you won't be speaking out every line of code. You will be pulling together predefined libraries and functions. Not much scope for mistyping when you say something like - "Run a for loop from 1 to less than 100. Inside that for the array of individual call
  • and slower than typing with your fingers.

    But: eye tracking to do things like move focus among multiple windows or displays could be something useful for many people's workflows.

  • speech (Score:4, Insightful)

    by awwshit ( 6214476 ) on Sunday March 28, 2021 @10:30AM (#61208650)

    pound include space less-than ess tee dee aye oh dot aitch greater than return return

    void

    Boy I can't wait.

    • by PPH ( 736903 )

      Prior art [youtube.com].

    • The speech interface should let you say: "include standard I/O". It would generate the right code because it knows the context is the C/C++ programming language. It would also put that expression at the top of your file, in the section where you put all your other includes, without requiring you to navigate there, because it would have some understanding of how people code and organize code files.

      Speech recognition is not a replacement for your keyboard. The good programming by voice environment would let y

  • by gweihir ( 88907 )

    A) It is a stupid idea
    B) It does not make coding easier

  • by AlanObject ( 3603453 ) on Sunday March 28, 2021 @10:59AM (#61208750)

    I have been developing software now for a half century. I cannot recall a year when one or more of the cherished shibboleths of the celebrated "futurists" didn't make it into technical media.

    Today's edition: "We don't need programmers anymore! Just say what you want and the magic software will do it!" A Majel Barrett in every house and every office, in other words.

    Even while I was punching out FORTRAN IV programs on Hollerith cards, there would be some assistant professor in the room next door writing out yet another article about how "very soon" now that we won't need software programmers anymore and the computers would be programming themselves.

    This is a variant of that. If you have a hard time with mouse/keyboard I can understand why voice recognition would be attractive. But that doesn't fit well into the thought processes of what I bet is the majority of software creators and architects. Even a simple web application (an SPA such as Angular, which I use) will have hundreds if not thousands of potential symbols, structures, and contract points (i.e. function calls) that need to be kept in mind, picked out and utilized at any given moment.

    Stephen Hawking could do it in his field. He could dictate equations and transformations to his assistant and then go back and order a change on page 37 when they were now on page 150. He had to develop that skill from being immobile. I doubt 99% of the coding world could ever do it.

  • If you're doing enough low-thought text output that typing speed is your bottleneck, that is often a flag that you are not programming effectively. You might need to be making better use of functions, or other program structure, and/or making better use of editor features for autocomplete and whatnot.

    Naturally if you have difficulties typing, this balance changes and these kinds of products obviously make sense. But otherwise you should be able to "type faster than you think" for most programming scenario

    • I suspect most of the problems in software comes because developers can type faster than they can think.
    • Well, I type pretty fast.

      But your idea that you can type faster than you think is nonsense: who would put down the letters, if not me? Where do they come from? From my brain. Obviously I can formulate a sentence - and programming blocks are just "sentences" in my mind - 100 times faster than the fastest human can type.

      • by JMZero ( 449047 )

        What I'm saying is that typing speed is rarely an important bottleneck.

        Even when I'm working on reasonably straightforward stuff, I'm not just vomiting out big chunks of mentally pre-formed code. I'm looking around code that's there, doing trial runs, maybe writing down a little bit of stuff, checking a document, or bouncing an idea off someone.

        Time spent "deciding what to do" should normally outweigh "time spent typing that out". Like, if I had to code with one finger I would still be 95% as productive m

        • Well,

          from an "efficiency" point of view you might be right. I also spend a lot of time thinking and often don't type a whole day. However, when I do type I find it super boring and often get distracted by the thought: which tool could do this more easily, so I don't have to type for 24 hours now to get a basic thing working that I was thinking over for two weeks.

  • by xack ( 5304745 ) on Sunday March 28, 2021 @11:44AM (#61208928)
    Spoken programming languages with a lot of symbols would be awful.
  • import Foundation
    imbibe Core Floaty

    class EatUpMartha {
    }

  • communication.
    we have people to people communication all the time.
    in crowds.
    what problem solvers are going to have to solve is a person communicating to a computer in a crowd of people.
    this is worthy of being a problem to solve

  • "Computer, create an adversary that can outthink Data."

  • I've used speech recognition since 1994 and through this whole time I've looked at different ways of using speech to create and edit code. I am very discouraged by what I see because people are still making the same mistakes in user interfaces today as they were back in the early 90s.

    Speech recognition is not a replacement for your keyboard. It is a different method of entering text and triggering commands. The difference between a GUI and speech interfaces is far greater than the difference between a GUI

    • Thanks for your informative post from experience.

      I mused a bit about voice-controlled programming circa 2000 when I was working at the IBM Speech Group, and came to the conclusion we needed a new programming language to make this easier. Something that seemed more like a human language, with entire words and not abbreviations or special characters. Maybe even something that in some ways looked somewhat like Forth (but with normal words)? Or maybe even a bit like HyperCard's HyperTalk ( https://en.wikipedia. [wikipedia.org]

      • I agree that a programming language designed around the properties of the speech-driven environment would be a great solution, but unfortunately its adoption would be even smaller than that of Forth. I headed down the abstract syntax tree route because it seemed to be the best model for using larger language concepts such as "push to next index", "pop back" or "server list indexed by random server".

        For what it's worth, the last example would generate something like:

        srvList[random_server()]

        The log
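        The phrase-to-expression step can be sketched as a small translator. This is a toy under invented assumptions: a single "X indexed by Y" pattern, a hypothetical abbreviation table that shortens "server" to "srv", and camelCase for multi-word names:

        ```python
        # Toy translator for one hypothetical phrase pattern:
        # "<name phrase> indexed by <name phrase>" -> subscript expression.
        ABBREV = {"server": "srv"}  # assumed user-configured abbreviations

        def to_camel(words):
            """Join words into a camelCase identifier."""
            return words[0] + "".join(w.capitalize() for w in words[1:])

        def translate(phrase: str) -> str:
            target, index = phrase.split(" indexed by ")
            words = [ABBREV.get(w, w) for w in target.split()]
            index_ident = "_".join(index.split())  # index phrase -> call
            return f"{to_camel(words)}[{index_ident}()]"

        print(translate("server list indexed by random server"))
        # -> srvList[random_server()]
        ```

        A real AST-based system would resolve "random server" against symbols actually in scope rather than string-mangling, but the shape of the mapping is the same.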

  • In case of disability, speech may be a helpful substitute interface to computers in general, but not as an input method for a CLI or code.

    Rather than "for(int x = 0; x < 10; ++x) { cout << x << endl; }" maybe if I could say (think?) "output on console integers from 0 to 9"?

    To make it useful we'd have to get the programming to higher abstractions, sort of a change from "cooking recipe" to "ordering a meal in the restaurant".
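    The "ordering a meal" level can be sketched as intent expansion: match one high-level phrase, emit the concrete loop. This toy recognizes a single hypothetical pattern and expands it to C++ text:

    ```python
    import re

    # One hypothetical intent pattern, expanded into a concrete C++ loop.
    PATTERN = re.compile(r"output on console integers from (\d+) to (\d+)")

    def expand(phrase: str) -> str:
        """Expand a recognized intent phrase into C++ source text."""
        m = PATTERN.fullmatch(phrase)
        if not m:
            raise ValueError("unrecognized intent")
        lo, hi = int(m.group(1)), int(m.group(2))
        return (f"for (int x = {lo}; x <= {hi}; ++x) "
                f"{{ std::cout << x << std::endl; }}")

    print(expand("output on console integers from 0 to 9"))
    # -> for (int x = 0; x <= 9; ++x) { std::cout << x << std::endl; }
    ```

    The hard part, of course, is that a useful system would need thousands of such intents plus a way to compose them, which is exactly the abstraction jump the comment is asking for.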

  • Speech uses ~80% cognitive load (aka 'brain processing power'), typing only ~20%. So that's probably a dead end, as researchers already found out a while ago (ask Google, too lazy to look). If you're proficient at typing, you have a lot more brain free for actual programming work. Not so with speech. That's why good speeches require writing, planning and then practicing. Speaking is way more complex.

    • by mark-t ( 151149 )

      Speaking might require higher cognitive load, but to be fair, if programmers took more effort to think about the code that they produced in the first place, there might be fewer bugs to fix later.

      Just saying.... a higher cognitive load required to express the code to a computer just to enter it might not be a bad idea.

  • We had this promise with visual programming. I was hoping it would succeed.
    But the reality is, I am programming things less visually today than I did 20 years ago.
    Part of the reason is I was using Delphi then. I am using Python and R now.

    Frontpage and Dreamweaver were choice web developer tools 20 years ago. Now it's VS Code.
    We really didn't move forward on the visual front that much. I thought Eclipse was building critical mass for a visual RAD ecosystem, but we seem to have dropped it for VS Code and Atom.

  • Voice commands are a terrible input device. It's far easier and quicker to type and hit any of the many keys on a keyboard than say the voice equivalent, and I'm being generous in assuming that the voice software is perfect and always interprets and does the right thing with the spoken commands.
  • by Shaitan ( 22585 )

    I'd dictate the reason to this speech-to-text engine, but I need to catch my breath and get a drink of water.

  • Ignoring all of the other issues that people have pointed out, I'd like to see how this would work with pair programming. How would the system tell the difference between discussing code and actually making changes?

    I overall get the feeling that the poster has no concept of what "new frontier" actually means. Major breakthrough for the disabled? Possibly.
  • Want proof?

    Try reading the source code for a medium sized program out loud. How long until you go hoarse?

    Try dictating source code to someone who is typing it out. How long until you look and see a better way to do it? Then you have to move the code around by voice while possibly adding new code around the moved parts.

    This is a solution for a niche audience.
  • As the old line from Usenet goes, the just-fired programmer comes out of the HR office and screams, at the top of their lungs, "command-system-run-format c: YES, YES, YES!!!"
