Sunday, December 15, 2013

Scientific Computing


There are a million uses for a computer in the field of science. A million. So, as you might guess, there are about a million ways I could tell you scientific computing happens. But instead of boring you with endless lists of how things happen in the lab, I will just tell you a few ways that computers are useful to specific types of scientists.

The first is computational science: the science of constructing mathematical models and quantitative techniques to analyze scientific problems. Scientists write programs that model the systems being studied and then run these programs on different sets of input parameters. The next is numerical analysis, the study of algorithms that use numerical approximation for the problems of mathematical analysis. Modern numerical analysis does not attempt to find exact answers, but instead obtains approximate solutions while maintaining a small margin of error. Symbolic computation is another branch of scientific computing. It focuses on studying and developing algorithms and software for manipulating mathematical expressions and objects, which can be evaluated to exact or inexact values.
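That "approximate answer within a small margin of error" idea can be sketched in just a few lines. Here is a minimal, illustrative example (the function, interval, and tolerance are all made up for the demo): the bisection method narrows in on a root of f until the remaining interval is smaller than a chosen error tolerance.

```python
# Illustrative sketch of numerical approximation: bisection keeps halving
# an interval known to contain a root until the interval is tiny.

def bisect(f, lo, hi, tol=1e-9):
    """Approximate x with f(x) ~ 0, assuming f(lo) and f(hi) differ in sign."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid          # the root lies in the left half
        else:
            lo = mid          # the root lies in the right half
    return (lo + hi) / 2

# Approximate sqrt(2) as the positive root of x^2 - 2 = 0.
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
print(root)  # close to 1.41421356..., within the tolerance
```

The answer is never exact, but the loop guarantees the error is below the tolerance we asked for, which is exactly the trade-off numerical analysis studies.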

Computational physics/biology/chemistry are the study and development of algorithms to solve problems in physics, biology, or chemistry. Computational physics was the first of the three to use computers to solve problems. One example from physics is the matrix eigenvalue problem, which finds the eigenvalues and corresponding eigenvectors of very large matrices. Lastly, computational neuroscience is the study of the information processing properties of the brain and nervous system. It models the essential features of biological systems and generally deals with single-neuron modeling, development, axonal patterning, guidance, sensory processing, memory, cognition, discrimination, and learning.
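To make the eigenvalue problem a little more concrete, here is a rough sketch of power iteration, one classic way to approximate a matrix's dominant eigenvalue and eigenvector. Real solvers handle enormous matrices with far more sophistication; the tiny 2x2 matrix here is purely illustrative.

```python
# Sketch of power iteration: repeatedly multiplying a vector by the matrix
# makes it line up with the dominant eigenvector.

def mat_vec(A, v):
    """Multiply matrix A (list of rows) by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, steps=100):
    v = [1.0] * len(A)
    for _ in range(steps):
        w = mat_vec(A, v)
        norm = max(abs(x) for x in w)   # rescale so values don't blow up
        v = [x / norm for x in w]
    w = mat_vec(A, v)
    return w[0] / v[0], v               # eigenvalue estimate, eigenvector

A = [[2.0, 1.0],
     [1.0, 2.0]]                        # eigenvalues are 3 and 1
eigenvalue, eigenvector = power_iteration(A)
print(eigenvalue)  # approaches 3.0, the dominant eigenvalue
```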


Computer Graphics


Computer graphics are what make computer screens so fun to look at. What would computers be without graphics? Things would be quite a bit more difficult to learn without that handy image or graphic to reinforce the concepts we encounter every day online. The fact is that computer graphics are quite necessary, and rendering them wasn't always an easy task.

There are a few different stages when it comes to generating visual images, and the first of them is 2D image or pixel computations. These deal with rotating or displaying images in a defined area and shape on the screen. Then things get a bit more complicated when you get into curve computations, which deal with Bezier curves and matching those curves to other shapes to create images. Finally, we get into 3D computations. This involves rotating a 3D point or computing the volumes of 3D shapes, taking a number of 2D objects and creating a 3D object from them (such as a mesh texture), and testing the intersection of objects in 3D to determine whether or not they are touching. These are just some of the light starter concepts when you begin to look into how computer graphics work and the algorithms behind them.


Monday, December 2, 2013

Communications and Security


What would we do without the internet? Or, to rephrase that question, what would the internet do without security? Would it be useful to us at all? I believe that it would still be useful, but not half as much. The internet would not be good for much more than information gathering. What I am saying is that security is what makes the internet so useful to us all. It is what allows us to use the internet to transfer important data and to accomplish important everyday tasks online.

Without security, the internet is not good for much at all. That is why one of the greatest weapons to help you protect your security online is cryptography. “Cryptography has the power to provide secure communications, protect transactions, provide powerful privacy, and validate the integrity of information.” The one problem with cryptography is that most people don’t know how to use it or work with it effectively. There are numerous users on the internet (most of the general population, in my opinion) who never inspect the cryptographic security certificates of the secured HTTPS websites they visit. When these same average users install applications, they do not check whether they come from trusted sources (although even installing apps from trusted sources is not necessarily a guarantee of their security, either). The internet is growing at a frightening pace. At this stage of development, most new internet users have no idea about things like security awareness or the security mechanisms they can use to protect themselves. That is why we, as computer scientists, must pay close attention to making these cryptographic exchanges of information as foolproof and user-friendly as possible.
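One of the cryptographic powers quoted above, validating the integrity of information, can be shown with a small sketch using Python's standard hmac and hashlib modules. The key and message below are made-up placeholders, not a real protocol.

```python
import hmac
import hashlib

# Sender and receiver share a secret key (a placeholder value here).
key = b"shared-secret-key"
message = b"transfer $100 to account 42"

# The sender computes an authentication tag and sends it with the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    """Recompute the tag; compare_digest avoids timing side channels."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                          # True: intact
print(verify(key, b"transfer $9999 to account 13", tag))  # False: tampered
```

If anyone alters the message in transit, the recomputed tag no longer matches, so the receiver knows the data was tampered with.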


Artificial Intelligence



Many decades have been spent in the attempt to emulate human intelligence with a computer. This was the original definition of artificial intelligence, after all. “The 1950s and ’60s believed success lay in mimicking the logic-based reasoning that human brains were thought to use. In 1957, the AI crowd confidently predicted that machines would soon be able to replicate all kinds of human mental achievements.” This was simply not true. Part of the reason for this fallacy was that we still don’t really understand how the human brain works, which makes emulating its logical thought paths even more difficult. This is what caused a major shift in artificial intelligence technology: we did not understand what we were trying to emulate. So these days, “artificial intelligence,” as it’s called, has changed shape to tackle discrete, simpler problems one at a time. “Today’s AI doesn’t try to re-create the brain. Instead, it uses machine learning, massive data sets, sophisticated sensors, and clever algorithms to master discrete tasks.”

The fact is that computers lend themselves to certain types of tasks much better than to others. The simplest example is that computers have no capacity for emotion, only logic-based decisions. Computers need parameters in order to make a decision, whereas humans are capable of making decisions without any relevant data if they are so inclined. Even if a computer were to generate a random number, it would still have parameters on what kinds of numbers fall within its domain or workable set. Due to these factors, I believe “true” artificial intelligence (emulating the human brain) is impossible for a modern computer to achieve. However, computers can achieve tasks that are useful to humans in so many other ways that it was worth redefining “artificial intelligence.” And we have.
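The point about random numbers still being bound by parameters is easy to demonstrate. A tiny illustrative example: a pseudo-random generator is both confined to the domain we specify and, given the same seed, completely reproducible.

```python
import random

# Seeded generator: "random", but the sequence is fully determined by the seed.
rng = random.Random(0)
samples = [rng.randint(1, 6) for _ in range(10)]  # domain fixed to 1..6
print(samples)

# The same seed yields the exact same "random" sequence again.
rng2 = random.Random(0)
replay = [rng2.randint(1, 6) for _ in range(10)]
print(replay == samples)  # True
```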



History of Computer Science



Computer Science has a rich history, as it should, being “the science of using computers to solve problems”. There is no better way to live the easy life than getting a computer to do your work for you. Computers have changed the way the modern world works – in a big way. We depend on computers for anything from basic mathematical calculations all the way up to rendering 3D graphics or telling us the shortest path between two points. We have become fairly dependent on computers to perform daily tasks for us. If we had to convert back to analog methods – in other words, if we were now forced to get along without computers – our future would be pretty dim. The rate at which technology is developed would almost flat-line. Very few people would even be capable of tasks that are simple for a computer, like calculating the taxes they send in to federal and state agencies.

To me, the history of computer science began with number theory. How can combinations of things stand for other things of use? Once digital logic was invented, it became the birthing ground for a real computer with which to study and further develop computer science. The years from about 1900 to 1939 were when the necessity of doing complex mathematical calculations drove the development of the early “calculating machines”. Then, in the 1940s, Howard Aiken, with the assistance of IBM, built the Harvard Mark I, one of the first large-scale automatic digital computers. In the 1950s, “In hardware, Jack Kilby (Texas Instruments) and Robert Noyce (Fairchild Semiconductor) invented the integrated circuit in 1959.” It was not until the 1960s, though, that computer science really came into its own as a discipline. In the 1970s and 80s, the RSA public-key cryptosystem was created and Apple Computer brought on the personal computer, respectively. These days, computers are getting smaller and smaller thanks to nanotechnology. Thanks to the “information superhighway”, the rate at which new findings and data are shared is simply astounding. The internet is also a large contributor to the pace at which computer science has developed in the last twenty years.


File Sharing


File sharing is just what it sounds like: sharing files with someone else, either over the net or physically. Sounds simple, right? Well, it gets much more complicated when copyright laws come into play with the information being distributed. File sharing has become a great concern for copyright holders in the digital media industry, namely the film and music industries.

To date, there are more than a few ways to share files on the net, which is where things really start to get complicated. One of them involves directly posting files to a server and allowing people to download them. This method is oftentimes not very reliable because the filenames are usually modified to prevent the owner of the information from knowing exactly what is contained in the download; once the owner finds the file, they will force the server to take down the link to it. Another method is called peer-to-peer, or P2P for short. This is a method of sharing files where one person makes a file available directly from their computer to the other computer that is downloading it. A variation of P2P is called torrenting: the shared file is broken up into tiny pieces, and a torrent file (.torrent extension) contains instructions on how to put the pieces together again. Numerous lawsuits have been fought over file sharing, but no matter how many are won or lost, they haven’t dampened the desire of millions and millions of people to keep sharing these files back and forth every single day.
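The torrent-style "pieces plus instructions" idea can be sketched simply. This is only an illustrative toy (real torrents use much larger pieces and a richer metainfo format): the file is split into fixed-size pieces, each piece is hashed, and the hash list lets a downloader verify every piece it receives before reassembly.

```python
import hashlib

PIECE_SIZE = 4  # toy value; real torrents commonly use 256 KiB or more

def make_piece_hashes(data):
    """Split data into pieces and hash each one, like a torrent's piece list."""
    pieces = [data[i:i + PIECE_SIZE] for i in range(0, len(data), PIECE_SIZE)]
    return [hashlib.sha1(p).hexdigest() for p in pieces]

def verify_piece(piece, index, hashes):
    """Check a downloaded piece against the published hash for that index."""
    return hashlib.sha1(piece).hexdigest() == hashes[index]

data = b"hello world, shared in pieces"
hashes = make_piece_hashes(data)

print(verify_piece(b"hell", 0, hashes))   # True: piece arrived intact
print(verify_piece(b"h3ll", 0, hashes))   # False: corrupted piece rejected
```

This is why torrent downloads can pull different pieces from different strangers and still end up with a byte-for-byte correct file.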


Data Structures



A data structure is “a particular way of storing and organizing data in a computer so that it can be used efficiently.” There are many different kinds of data structures. Usually one data structure is chosen over another because it is more useful for the task at hand. Some examples of data structures are arrays, records, hash tables, unions, sets, graphs, and objects. The purpose of all of these is to manage large amounts of data efficiently. For example, an array stores its elements in a specific order: there is an index and an element, and the index tells where in the array the element is stored. The elements of an array can often be of any data type, and arrays can generally contain other data structures within them, such as a nested array, where one array is stored inside another.
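The index/element behavior and nesting described above look like this in practice. A small illustrative example (the values are made up): an array accessed by index, a nested array forming a grid, and a record-like dictionary holding the array as one of its fields.

```python
# An array (Python list): the index tells where each element is stored.
scores = [88, 92, 75]
print(scores[1])             # index 1 holds the element 92

# Elements can themselves be data structures: a nested array makes a grid.
grid = [[1, 2, 3],
        [4, 5, 6]]
print(grid[1][2])            # row 1, column 2 holds 6

# A record-like structure (dictionary) can hold the array as a field.
record = {"name": "Ada", "scores": scores}
print(record["scores"][0])   # reach through the record into the array
```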


Data structures are part of the beauty of modern high-level programming languages. They can store data in many more ways than the few primitive data types of years past. Being able to store data in more complex data structures shortens the amount of time and code it takes to manipulate the data itself. Modern data structures save even more time when it comes to entering large amounts of data. Overall, data structures are one of the most important concepts in modern computer science, because without a way to store the data, how could we ever hope to manipulate it?


Hacking



Hacking has been both a problem and a blessing for computer users since the beginning. From the conception of the first “secure” system, there was at least one guy hacking it. In fact, hacking is the perfect way to test a system for potential backdoors or other security concerns. Hacking is a necessity and an evil at the same time. It stops us from having systems that we can call completely secure.

There are many different reasons to hack a system. One of them is security: to tell whether or not a system is secure, you need to have someone hack it and see if they can get in. These hackers are called “white hat” hackers because their intentions are non-malicious and they work with permission. Then there are “grey hat” and “black hat” hackers. “Black hat” hackers are hackers whose intentions are malicious or purely for personal gain, so they are mostly criminals. “Grey hat” hackers are a combination of black and white; in other words, part of their work is for personal gain and part of it is legitimate. CA PENAL CODE SECTIONS 484-502.9, which cover theft and computer crimes, define the underlying offense this way: “Every person who shall feloniously steal, take, carry, lead, or drive away the personal property of another… ... or who shall knowingly and designedly, by any false or fraudulent representation or pretense, defraud any other person…” Hacking is also a great way to show your skill as a computer scientist. It takes a full understanding of how a system works to be able to hack said system.


Monday, October 14, 2013

OPEN SOURCE: A proper replacement for proprietary software?


The open source model is a different one, standing for different values than its commercial counterparts. Open source is a great idea because it makes source code available to the general public for any purpose, including copying, modifying, and redistributing it. It has increased the transparency of code for many purposes, including research. Open source software has brought us some great names in software, including 7-zip, Firefox, Chromium, and OpenOffice.org. It has also brought us some great operating systems, such as GNU/Linux and Android.

There are many advantages to the open source software model. The first is that licensing fees are very low. It is easy to manage. It allows for continuous improvement. It allows companies to be independent. And it allows people to view, analyze, and learn from the source code.

However, whatever goes up must come down. There are a few notable downsides to the open source software model that deserve to be talked about. One is that it often involves unanticipated implementation and support costs. It has a large learning curve: you can’t just sit down one day, analyze the OpenOffice.org source code, and fully understand it; it takes a long time to get your staff up to speed. There is also version control – a lack of documentation may lead to version control issues, which will inevitably lead to compatibility issues on proprietary platforms later. Some project leaders up and leave, leaving nobody to complete or maintain the software. And there may be nobody to answer your technical questions, because many open source projects lack a formal tech support program.

So, as you can see, it’s not all a walk in the park for open source software, but it is definitely a step in the right direction.

Img courtesy: voipfreak.net

Tuesday, October 8, 2013

AGILE: One of the most efficient software development schemes


Agile is a system of software development practices which promises low overhead, high flexibility, and satisfied customers. Although it may sound too good to be true, Agile development practices have led to many benefits for leading organizations for years. According to Wikipedia, “Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams.” One of its major benefits is that it uses a time-boxed iterative approach, and allows for rapid response to changes made to the requirements of the software.

In the mid-1990s, Agile software development evolved from the earlier, much heavier-weight waterfall approach, which used to be commonplace among software developers. Waterfall is a sequential design process, where progress is seen as flowing steadily downwards through the stages of the software process. The waterfall moves from one stage to another, starting with the requirements and then moving on to design, implementation, verification, and maintenance. At any given time, the group is working on only one of these stages.


This is where the benefit of Agile can be seen. Working software can be published sooner because the developers are more self-motivated and self-driven. The developers assign portions of the job to themselves, so they can first sense where their skill sets will be of the most use. Agile allows incomplete but working software to be turned out sooner, giving the development team something to show the customer to keep them interested and funding the team. From the initial skeleton of the program forward, Agile makes it easier to deal with the inevitable changes to the software. Agile is more prepared to deal with these issues because it releases successive versions of the software, each building off the previous one. By the time the customer is satisfied with the product, there may have been more or fewer versions of it. That in itself is another benefit of the Agile development process, because it allows more or less work or specialization to be done on request. Also, if one developer leaves the team, the team will be less devastated than with the waterfall development model, making the company less dependent on each individual software developer. Is that a good thing for the developers? No. However, it is one of the most efficient software development models out there, so we should all become familiar with it.

Friday, September 20, 2013

LinkedIn and Branding: networking your way to success


LinkedIn is not just a place to post your resume, but instead, a conveniently accessible community that fosters networking and more opportunities for all.  In more detail, it is a channel for both job and employee seekers to more quickly and easily find what they are looking for. One can simply search for certain key words to have a wealth of relevant search results at his fingertips.

LinkedIn is a playground for professionals to stay in touch, to go beyond the scope of their own resumes. It is a place where a professional can build and improve himself based on different exposures too.


This online professional profile site provides a wealth of tools for the user. For example, if a job seeker wants to be introduced to someone his colleague knows, he can simply request that his colleague introduce him to the third party. Another great aspect of LinkedIn is that users can see what fellow members’ goals and interests are – and are therefore able to ask relevant questions and discuss relevant topics. The user not only has the chance to post facts about his work history and education, but has the opportunity to flash a little personality, which provides a better picture of himself as a whole. Overall, LinkedIn is the gateway for one to truly network, connect, and keep in touch with important professional opportunities, helping people promote themselves as brands.

Friday, September 13, 2013

QR Codes: Deserve more attention than they’ve been getting

This cool-looking QR code links users right to my Blog.
What is a QR code? For those who know what it is: how many of you have seen one around and recognized it? And among those who recognized it, who knew that they needed a smartphone and a special app just to scan the code?

For those who don’t know, a QR code is a 2D barcode (similar in spirit to the 1D barcodes you find on products in stores, but square and two-dimensional) which can encode an alphanumeric character set instead of just a numeric one. So, as you can imagine, a QR code is an easy way to “type” a select amount of information (letters and numbers) into a scanner and do something with that information. A classic example would be scanning a QR code which contains a website’s URL and takes you to the website. They are very useful, but haven’t been promoted enough yet.

These are today’s problems with QR codes: the technology is genius, but the utility is diminished by difficulty of use. The posters, billboards, and websites which proudly display these barcodes of wonder don’t even suggest an application with which to scan the code! I do not believe any of this will stop QR codes from ruling the marketing world in the near future; however, I do believe these reasons are a large part of why they have taken so long to catch on. “It doesn't stop there - a QR Code can also contain a phone number, an SMS message, V-Card data or just plain alphanumeric text, and the scanning device will respond by opening up the correct application to handle the encoded data appropriately courtesy of the FNC1 Application Identifiers that are embedded in the encoded data.” (*) So I say "use 'em everywhere", because there's a lot of free marketing to be had from these things!

(*) http://www.qrstuff.com/qr_codes.html

Img courtesy: http://www.beautifulqrcodes.com/index.php?

Friday, September 6, 2013

Social Networking and security... or the lack thereof when promoting your brand.


All brands need promotion in order to keep their name known. There are many ways to promote your brand these days, whether it's a poster on the front window of a business or a more widespread method like using social media. Social media has proven itself one of the most cost-effective ways to promote your brand or business in today's age of technology. Social media, however, has recently been shown to have a few shortcomings for small businesses if not used correctly.

Security shortfalls have been one of the major worries for small to medium businesses promoting their brands through social media. Although these networks are among the best low-cost online marketing tools, the malware involved and the lack of a clear list of rules for employees to follow oftentimes conflict with profitability. According to Panda Security’s Social Media Risk Index for Small to Medium Sized Businesses: “Panda conducted a survey of over 1,000 small to medium businesses and found that 35% of small to medium businesses had suffered a financial loss due to their involvement in social networks, with 35 percent suffering losses in excess of $5,000.” (*)

The general advice for maintaining profitability involves creating a clear policy of expectations and guidelines for your employees to follow. The first and most important is to protect your sensitive data: be careful what your employees post about the inner workings of your company. It is very important to keep your reputation high, because as we all know, reputations are easier to destroy than they are to make. Talk to your employees about how much time they are spending on social media sites, because spending too much time on these websites can be counter-productive. Last but not least, social networks are infamous for spreading viruses and malware through downloads and links from “people in your network”. It’s a common misconception that links or downloads on social networks are safe; oftentimes they’re not. Be careful what your employees are doing online while they are at work! Most importantly, have a plan in case something goes wrong! Don’t get caught in the rain without an umbrella.


img courtesy: www.hannity.com

Friday, August 30, 2013

A Welcome Note

Hello everyone,

I'm Brian Guilardi, a Computer Science student at San Jose State University. I'm posting primarily to introduce myself to you all but also to invite all of my peers, teachers, and friends to visit my new series of Blogs I will be publishing this fall 2013 semester. My technical expertise lies mostly in problem solving, whether it be programming micro-controllers or writing scripts to automate processes, I love finding solutions to problems. For these reasons, the field of Computer Science interests me heavily.

Computer Science is a nonstop technical race to learn the newest cutting edge technologies, languages, libraries, algorithms, and development environments. It is a non-stop adventure that begins with identifying a problem, mapping a solution, creating a solution, and then maintaining said solution. It is a really exciting field to be in these days, because it changes from one day to the next. New technologies are invented constantly and are implemented and absorbed almost instantly by the Computer Science/Software Engineering community.

My love for Computer Science came about when I realized the strengths that computers really have. In my opinion, anything that involves math or repetition is not the job of a human, but instead the job of a computer in today's world. People should no longer have to do menial repetitive labor – good examples being machining parts, picking vegetables, or assembling automobiles. If the process is the same over and over again, it can be automated through software and hardware. That is the true beauty of Computer Science, and the real reason why I love it so much.

Img courtesy: ceecs.fau.edu