Never Stop Learning Computer Science<br />Learning about all things computer science, especially machine learning, natural language processing, and computer architecture.<br /><h2>RISC Architecture (January 14, 2021)</h2><p>I accidentally ran across <a href="https://www.youtube.com/watch?v=DIccm7H3OA0" target="_blank">this video</a> from 1986, and I love it for so many reasons. First, the video is about RISC architecture, which I teach in my undergrad course. Second, it calls to mind a show I really like, <i>Halt and Catch Fire</i>, since the video is from the same era that the show portrays, the early days of my computer science adventures. Surprisingly, given that the video is 34 years old, it is still a good overview of RISC concepts.</p><p>Here are some time-stamped highlights:</p><ul style="text-align: left;"><li>1:16 The PC alternative shown is the Leading Edge (1200-baud modem, woo-hoo!). My husband bought this stock way back when and lost a bundle. He started calling the company "Bleeding Edge."</li><li>1:50 The guys play the "RISK" board game (flashback!) while describing the technology challenges of scaling up to memory as MASSIVE as 8M!</li><li>5:00 David Patterson appears as his younger self.</li><li>11:15 Ridge Computers? Gone to the dustheap of startups, I suppose.</li><li>13:45 Check out the boxy mouse! The machine costs $10,000. Yikes!!!</li><li>19:25 Palo Alto experts highlight the problems of introducing RISC to the market. Xerox PARC, in Palo Alto, was the research center whose innovations inspired Steve Jobs; he gleefully borrowed them and then complained when Bill Gates did the same.</li><li>24:40 Lotus 1-2-3's HAL AI program. Seriously?</li><li>25:10 News flash: computers will expand class size. C'mon, class size is driven by financial considerations.</li><li>25:30 A preview of new and modern software (sarcasm).</li></ul><p>Everything old is new again: beards, thin ties, terrorism, the promise of a molecular computer.</p><p>Enjoy: <a href="https://www.youtube.com/watch?v=DIccm7H3OA0">https://www.youtube.com/watch?v=DIccm7H3OA0</a></p><h2>Your Brain on Code (December 16, 2020)</h2><p>Is reading code like solving math problems? Or is it more like reading in general? Neither, it turns out.</p><p>MIT researcher Evelina Fedorenko and her team placed coders in a functional magnetic resonance imaging (fMRI) scanner, showed them code in a programming language in which they were proficient, and then asked them questions about the code.</p><p>Although coding languages have many things in common with human languages, like meaning units and syntax, the fMRI showed that reading code did not significantly activate the regions of the brain associated with language. These language regions, including Broca's area, are primarily in the left hemisphere.</p><p>Instead, the distributed parts of the brain known as the multiple demand (MD) network were activated across the brain. These regions had previously been shown to be activated for complex mental tasks like solving math problems or doing crossword puzzles.
However, the regions activated for the coders were not identical to those activated for math problems. Whereas math and logic problems tend to activate multiple demand regions of the left hemisphere, coding appears to activate MD regions in both the left and the right hemispheres.</p><p>The lead author of the study, Anna Ivanova, said, “Understanding computer code seems to be its own thing. It’s not the same as language, and it’s not the same as math and logic.”</p><p>In short, coding is a unique cognitive load, but you knew that already, didn't you?</p><p>Check out the published article here: <a href="https://elifesciences.org/articles/58906">https://elifesciences.org/articles/58906</a></p><h2>Google is watching (July 19, 2020)</h2>Google is watching you, but you knew that already. Several years ago I saw a pop-up like this:<div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnEC9otz6RZ6ygIKKv6iD4BeJrcz8fG9AoGJFAJubOCQWXw8Ws_NdLjZg-Fxyh6UQVdsthxcqO3m7BP1DnMODOwcFcuZNU-6PjZJKdsBxMQBuG1U16gUyYsBNYixWIBS77qFhiU15B3e2m/s1323/Screen+Shot+2020-07-19+at+12.39.28+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="222" data-original-width="1323" height="133" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnEC9otz6RZ6ygIKKv6iD4BeJrcz8fG9AoGJFAJubOCQWXw8Ws_NdLjZg-Fxyh6UQVdsthxcqO3m7BP1DnMODOwcFcuZNU-6PjZJKdsBxMQBuG1U16gUyYsBNYixWIBS77qFhiU15B3e2m/w781-h133/Screen+Shot+2020-07-19+at+12.39.28+PM.png" width="781" /></a></div><div><br /></div><div>It scared the heck out of me, so I just exited out of Chrome. It popped up again today, and I hit 'I want to play' and this appeared:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwezJpRKj_gbRfvkH42cz05dZCsZtuAdVvVJqt2USNnRYwBY8T7Vtzixm_rpRU-FxGorDM5EjzEGKOD85WhGiQdKX7hwnUdq7XGPfZW3-wI6vSRuZmtGe8tK7yVdmjatl2AsOzXbDGhWpF/s1358/Screen+Shot+2020-07-19+at+12.43.10+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="419" data-original-width="1358" height="243" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwezJpRKj_gbRfvkH42cz05dZCsZtuAdVvVJqt2USNnRYwBY8T7Vtzixm_rpRU-FxGorDM5EjzEGKOD85WhGiQdKX7hwnUdq7XGPfZW3-wI6vSRuZmtGe8tK7yVdmjatl2AsOzXbDGhWpF/w781-h243/Screen+Shot+2020-07-19+at+12.43.10+PM.png" width="781" /></a></div><div><br /></div><div>Wow. I have heard that this is how Google recruits developers. I don't know if that's true, but I exited out of this. Maybe next time . . .</div><h2>Innovation is about more than technology (July 10, 2020)</h2><div>I was watching an old Twilight Zone last night, filmed in 1962. The one where an elderly couple comes into a gleaming tech company to purchase new, young artificial bodies for themselves. Spoiler alert: the new technology has unforeseen negative consequences.
(Isn't that the theme of every sci-fi work?)</div><div><br /></div><div>While they are touring displays of young, healthy bodies they may inhabit, for a price, a leggy secretary comes in and tells the executive that he has a call on the video phone. Oh, how modern. A video phone. Artificial biology. And yet a woman still has to be the secretary?</div><div><br /></div><div>Why was it easier for the writers and producers of this show to imagine videophones, brain-transference technology, and more, but not a shift in gender roles? Shiny new objects gain entry into our lives more easily than new ideas do.</div><div><br /></div><div>It's always easy to look back and see what others did not see due to their cultural blinders. But what are we missing in this moment? How will people 50 years from now look at the choices we are making?</div><h2>Ethics in AI (June 27, 2020)</h2><div>This <a href="https://qz.com/1874134/computer-scientists-will-enforce-rules-for-ai-ethics/" target="_blank">article from Quartz</a> came into my phone's news feed this morning. AI conferences are beginning to think about ethics. Finally. Specifically, the premier machine learning conference NeurIPS (formerly NIPS) now requires a statement on broader societal impact for presented research. One of the top natural language processing conferences, EMNLP, will now reject papers on ethical grounds. Whether these moves actually keep AI ethical or just push the dirty parts underground remains to be seen. But we all have a voice in keeping each other in check.</div><div><br /></div><div>When I was a PhD student, our advisor had us sit in on talks by prospective faculty and arranged for the prospect to meet with just the students. I remember one young man boasting of one NLP project after another, each more horribly intrusive on innocent lives than the last, all of them done in internships with a major computer manufacturer. Finally, when we got to questions, I asked him, "Can you describe any project you have done which makes the world a better place?" He was taken aback but managed to mumble something about helping the corporation make more money.</div><div><br /></div><div>Leaving the meeting, one of my male peers said, 'Wow Karen, high five, you had a lot of balls to ask that question.' The truth is, I was uncomfortable asking that question; it felt aggressive, and I normally try to just be invisible and mind my own business. But as this article points out, the degree to which we hold each other accountable is the degree to which ethics may at least be some factor in the equation of research.</div><h2>ARMing Data Centers (May 8, 2020)</h2><div><br /></div><div>The server microprocessor market hovers around $15 billion per year. The market is largely dominated by Intel, although AMD is hoping to bite off 10% this year.</div><div><br /></div><div>In contrast, ARM historically focused on the mobile device and IoT markets, but now ARM is shouldering its way into the cloud computing market. A case in point is Amazon's Graviton2 custom processor, which uses 64-bit ARM cores. Eyes are on ARM due to its reputation for power-efficient cores with good performance.
</div><div><br /></div><div><h3>Ampere's Altra Chip</h3></div><div><br /></div><div>A new entry into the market this year, Ampere's Altra chip, may be a game-changer. The Altra features up to 80 single-threaded cores based on the ARM v8 architecture. Ampere's Renee James, a former president of Intel, founded Ampere Computing in 2017 and is currently its Chairman and CEO. Ampere markets the Altra as 'the world's first cloud native processor'. Ampere's Mt. Jade dual-socket rack server includes fast DIMM and NVMe solid-state drive support. <a href="https://www.youtube.com/watch?v=X9RYrcU6JQA&feature=youtu.be" target="_blank">The Altra is a chip to watch.</a></div><div><br /></div><h2>DFW R Users Group (September 22, 2018)</h2>The DFW R Users Group has a monthly <a href="https://www.meetup.com/Dallas-R-Users-Group/" target="_blank">Meetup meeting</a>. On Saturday, September 22, 2018, I gave an overview of machine learning with R based on my book.<br />
<br />
Here is a link to the <a href="https://drive.google.com/open?id=1lKbBMMrEutzf2GsnzK2rp1SCPptC-6dy" target="_blank">presentation</a>. If you just see HTML, click the "open with Chrome" link at the top.<br />
<br />
Here is a link to the quick <a href="https://drive.google.com/open?id=1ESzfS6OLn_ZWNUErD7_lBwHysm3jpMf_" target="_blank">demo</a>.<br />
<br />
Download the demo and presentation files <a href="https://drive.google.com/open?id=1yIBUVaFF0O8LaWk5XVaFSY7B22tzfzRc" target="_blank">here</a>.<br />
<br />
I had a great time, and I highly recommend this group to anyone interested in R.<h2>Data Sets (September 6, 2018)</h2>One of the hardest parts of starting out with machine learning used to be finding good data. The <a href="https://archive.ics.uci.edu/ml/index.php" target="_blank">UCI Machine Learning Repository</a> was the most comprehensive site, and it is still a great resource. The site currently has over 440 data sets. Later, <a href="https://www.kaggle.com/" target="_blank">Kaggle</a> and similar sites became popular. Google AI is becoming a comprehensive resource for both tools and data. As of this writing, Google has made available <a href="https://ai.google/tools/datasets/" target="_blank">over 50 data sets</a> for machine learning, and I'm sure more will come in the future.<br />
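<div>To make that concrete, here is a minimal sketch of pulling a UCI data set directly into R. It uses the classic iris data file as an example; the URL and column names below follow the UCI documentation for that data set, but verify them before relying on this, since the repository layout can change.</div><pre>
# Read the classic iris data set straight from the UCI repository.
# Assumption: this is the standard UCI path for iris; check it first.
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
iris_raw = read.csv(url, header = FALSE,
                    col.names = c("sepal_length", "sepal_width",
                                  "petal_length", "petal_width", "species"))

# Quick sanity checks before any machine learning
str(iris_raw)       # structure: 150 observations, 5 columns
summary(iris_raw)   # value ranges and class balance
</pre>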
<br />
It's an exciting time to get into machine learning!<br />
<h2>Chess with Robots (July 15, 2018)</h2>
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw-i8ubCXDrMc6fmYjpgU8sPXw9YY8Re3IibbyFErHH8btyPEbMSpvma04LnPpg_YhowPnfz8bgZ-pyb0NsE1dS4BqTOs1PBJGoDAzEtz55RecB40yY9i8hlkIcjU-pA1QBEkN0uoEtQ4G/s1600/Screen+Shot+2018-07-15+at+12.51.58+PM.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="342" data-original-width="489" height="223" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw-i8ubCXDrMc6fmYjpgU8sPXw9YY8Re3IibbyFErHH8btyPEbMSpvma04LnPpg_YhowPnfz8bgZ-pyb0NsE1dS4BqTOs1PBJGoDAzEtz55RecB40yY9i8hlkIcjU-pA1QBEkN0uoEtQ4G/s320/Screen+Shot+2018-07-15+at+12.51.58+PM.png" width="320" /></a>I just found out about this ongoing project at UTD, affiliated with our Robotics and Automation Society. The picture is of UTD's Chess Plaza on the central mall of our beautiful university. The goal of the project is to build robots that can autonomously navigate this chessboard. People can then download an app and play a chess game against the robots. How cool is that? I'll post again when the project is ready to play.<br />
<h2>Can NLP Save the World? (July 8, 2018)</h2>I was reading over <a href="https://nyti.ms/2J4oSRF" target="_blank">Maureen Dowd's article this morning</a> about Twitter civility when my eye stopped at this statement by Farhad Manjoo, that Twitter had “tweaked its central feed to highlight virality, turning Twitter into a bruising barroom brawl featuring the most contentious political and cultural fights of the day.” Really? I've had a Twitter account for a few years, and I never get any politically virulent tweets in my personal feed, but then I only follow NLP and ML researchers, people I have worked with, a couple of literary sites, and other pretty nerdy stuff. I vote; I keep up with issues by reading national news; I occasionally write my Senator and await his patronizing reply. I would call myself mildly politically active, but I have always thought that posting something political on the Internet was pointless. It seems that a lot of people feel otherwise. I've been missing out on the bruising barroom brawl. Thankfully.<br />
<br />
The article went on to mention that <a href="https://blog.twitter.com/official/en_us/topics/company/2018/twitter-health-metrics-proposal-submission.html" target="_blank">Twitter was proposing to use dialog health metrics</a> to monitor the tone of Twitter discourse in terms of four characteristics of healthy dialog identified by the non-profit Cortico research organization: (1) shared attention, (2) shared reality, (3) variety of opinion, and (4) receptivity. Twitter is calling out for expert help (see link) to implement tools for monitoring the discourse. This will be a very interesting NLP task to watch in coming years. We know that NLP and ML have been used to try to influence both the Brexit vote and the 2016 US election. Now it will be interesting to watch whether NLP and ML can save us from lies, manipulation, and trolling. What if a truthfulness (or lack thereof) icon were to appear next to tweets containing nonsense that has already been debunked by sites like Snopes? That seems like a relatively easy first step for an NLP/ML application; a toy sketch of the idea appears below. But how will people feel when an algorithm calls them a liar? This is going to get interesting.<br />
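<div>To show what that first step might look like, here is a toy R sketch that compares a tweet against a small list of already-debunked claims using word-overlap (Jaccard) similarity and flags close matches. The claims and the 0.5 threshold are invented for illustration only; a real system would need curated fact-check data and a much stronger text-similarity model.</div><pre>
# Toy example: flag tweets that closely match known debunked claims.
# The claims and the threshold are made up for illustration only.
debunked = c("the moon landing was staged in a studio",
             "area 51 is hiding alien spacecraft")

# Split lowercased text into a set of distinct words
word_set = function(text) unique(strsplit(tolower(text), "[^a-z]+")[[1]])

# Jaccard similarity: shared words / total distinct words
jaccard = function(a, b) length(intersect(a, b)) / length(union(a, b))

flag_tweet = function(tweet, claims, threshold = 0.5) {
  scores = sapply(claims, function(cl) jaccard(word_set(tweet), word_set(cl)))
  if (max(scores) >= threshold) "possibly debunked" else "looks ok"
}

flag_tweet("I heard the moon landing was staged in a studio!", debunked)
# [1] "possibly debunked"
</pre>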
<br />
<h2>Why R? (July 6, 2018)</h2>It's been interesting to watch the competition between Python and R over the past few years to be the <i>numero uno</i> language for machine learning. In my opinion, you should learn both. I use Python for NLP and R for machine learning, although sometimes I do a little NLP in R and a little machine learning in Python. It's basically a Toyota v. Honda debate. They're both great. The <a href="https://www.kaggle.com/surveys/2017" target="_blank">latest (2017) survey from Kaggle</a> shows where we are:<br />
<ul>
<li>Python is the most used tool, but statisticians prefer R for their ML work. </li>
<li>Titles vary by country for these professionals, but the most common is Data Scientist.</li>
<li>The majority of survey participants have a Master's or PhD degree.</li>
<li>The top four algorithms were logistic regression, decision trees, random forests, and neural networks. </li>
</ul>
<div>
Why do I teach machine learning with R?</div>
<div>
<br /></div>
<div>
The main thing I like about R for beginning machine learning aficionados is that it gets out of the way. The syntax is straightforward enough that you can focus on the machine learning instead of on the R syntax. With R, everything is in one environment, and you can <i>seamlessly</i> switch from data visualization to machine learning to statistics on your results to more visualization; a small example of that flow follows below. The beautiful free IDE <a href="https://www.rstudio.com/" target="_blank">RStudio</a> is another plus for R. With recent additions to R like the <a href="https://www.tidyverse.org/" target="_blank">tidyverse</a>, <a href="https://keras.rstudio.com/" target="_blank">Keras</a>, and so forth, there are no limits to what you can do efficiently in R.<br />
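<br />Here is a minimal sketch of that one-environment workflow, using the built-in mtcars data set so it runs anywhere; any of the usual modeling functions could stand in for the simple linear regression shown here.<br /><pre>
library(ggplot2)  # part of the tidyverse

# 1. Visualize: miles per gallon vs. weight, with a fitted line
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  geom_smooth(method = "lm")

# 2. Model: a simple linear regression in the same session
fit = lm(mpg ~ wt, data = mtcars)

# 3. Statistics on the results, then back to visualization
summary(fit)          # coefficients, R-squared, p-values
plot(fit$residuals)   # quick look at the residuals
</pre>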
<br />
So get started!<br />
<br />
Link to download R: <a href="https://www.r-project.org/" target="_blank">https://www.r-project.org/</a><br />
Link to download RStudio: <a href="https://www.rstudio.com/" target="_blank">https://www.rstudio.com/</a></div>
<h2>Here we go . . . (July 4, 2018)</h2><p>Today I'm starting this blog to focus on my primary interests: machine learning and natural language processing. Initially the focus will be on my new book on machine learning. Code samples are available on my GitHub:</p><p><a href="https://github.com/kjmazidi/Machine_Learning">https://github.com/kjmazidi/Machine_Learning</a></p><p>I accidentally started writing the book during Spring Break this year. I was thinking about the undergrad course I teach and how there was not a book that fit what I needed to teach. My first thought was to organize my notes in LaTeX. This turned into a book. I'm almost finished and plan to have it ready to go by 8/1/2018. Stay tuned!</p>