
Redesigning the computer

Discussion in 'Computer Hardware' started by amusicsite, 17 Jan 2018.

  1. amusicsite

    Parts of the computer are the way they are for legacy reasons or because of technical limitations, which makes them less than optimal. There are always people out there looking at ways to radically redesign the computer, by changing the materials and connections used or by optimising how they perform. In this thread we will look at some of the research and potential breakthroughs: things like using light instead of electricity, memristors, bio-computers and the like.

    To kick things off, there is The Machine, a project from Hewlett Packard Labs, HPE's R&D arm.

    https://www.labs.hpe.com/the-machine

    The concept is "How would you make a computer if you had a pool of fast non-volatile memory with unlimited storage?"

    They have been looking into it for a number of years, and if we manage to get memristors working they could be in a good position to take full advantage of them. Last year they showed off a prototype machine which shows promise.

    https://www.scientificamerican.com/...en-computer-sans-much-anticipated-memristors/

    This field is called memory-driven computing, where you have a huge pool of memory that you use for both storage and processing, potentially getting rid of any sort of cache because you work directly on the data without having to move it. This could remove a lot of the load associated with memory and storage management, which would speed things up a lot.
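
    A rough way to picture "working directly on the data" in today's terms is memory-mapping: instead of read()-ing storage into a RAM buffer, you operate on the bytes where they live. A toy Python sketch (the file here just stands in for a pool of fast non-volatile memory):

        import mmap
        import os

        PATH = "pool.bin"  # stand-in for a pool of fast non-volatile memory

        # Create a 1 MiB "pool" on first run.
        if not os.path.exists(PATH):
            with open(PATH, "wb") as f:
                f.write(b"\x00" * (1 << 20))

        # Traditional style: read() copies the data into a RAM buffer first.
        with open(PATH, "rb") as f:
            buf = bytearray(f.read())  # explicit copy
            buf[0] = 42                # edits the copy, not the storage

        # Memory-driven style: map the pool and work on it in place.
        with open(PATH, "r+b") as f:
            pool = mmap.mmap(f.fileno(), 0)
            pool[0] = 42               # no copy, no separate load/store step
            pool.flush()
            pool.close()

    With a big enough pool of fast non-volatile memory the copy step disappears entirely, which is the whole pitch of memory-driven computing.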

    Here's a good look at what was in their prototype machine, which seems to mainly be a working test bed where they can swap different components in and out as new technologies come to market.

    https://www.nextplatform.com/2017/01/09/hpe-powers-machine-architecture/

    One of the goals of this research would be to produce a real System on Chip (SoC), where you would have the CPU, GPU, memory and storage along with a comms chip. Theoretically you could have one chip with two pins for power and two pins for a Wi-Fi aerial, add wireless controllers and a display, and you're good to go. One of the big advantages of this would be that data would never have to leave the chip and could travel at maximum speed without any bottlenecks. This alone could drastically speed things up.

    The main hold-up at the moment is memristors; as soon as we get those working, memory-driven computing will probably become the new standard.
     
  2. amusicsite

    https://en.wikipedia.org/wiki/Biological_computing

    One of the other big promises that has yet to materialise is the biological computer. With the potential to grow more resources or memory as you need them, it has always been a compelling goal.

    As far as storage goes, there is not much better than DNA. The first attempt to sequence the human genome was completed about 20 years ago and cost about $1 billion. These days you can get your own DNA read for about $100, so it's been a fast-moving field where costs have dropped dramatically.
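
    To get a feel for the density, the usual toy scheme packs two bits into each base (A=00, C=01, G=10, T=11), so one byte becomes four bases. A minimal Python sketch of that idea:

        # Toy DNA storage codec: 2 bits per base (A=00, C=01, G=10, T=11).
        BASES = "ACGT"

        def encode(data: bytes) -> str:
            """Each byte becomes four bases, most significant bits first."""
            return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

        def decode(strand: str) -> bytes:
            """Inverse mapping: every four bases back into one byte."""
            out = bytearray()
            for i in range(0, len(strand), 4):
                byte = 0
                for base in strand[i:i + 4]:
                    byte = (byte << 2) | BASES.index(base)
                out.append(byte)
            return bytes(out)

        strand = encode(b"Hi")         # -> "CAGACGGC"
        assert decode(strand) == b"Hi"

    Real DNA storage schemes are more involved (error correction, avoiding long runs of the same base), but the two-bits-per-base arithmetic is why the density numbers look so absurd.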

    Obviously storage is only one part of the system. You also need to be able to read and write to it, as well as some sort of logic and switching capability. Another piece of the puzzle is the transcriptor.

    "A transcriptor is a transistor-like device composed of DNA and RNA rather than a semiconducting material such as silicon."
    https://en.wikipedia.org/wiki/Transcriptor
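
    As a mental model (glossing over the biochemistry), a transcriptor gates the flow of RNA polymerase along DNA the way a transistor gates current: integrase inputs flip terminator sequences in or out of the polymerase's path. A toy simulation of an AND-style gate under that reading:

        # Toy model of a transcriptor-style AND gate: the output gene is only
        # transcribed if every terminator in the polymerase's path has been
        # flipped out of the way by its matching integrase input.

        def transcriptor_and(integrase_a: bool, integrase_b: bool) -> bool:
            # Two terminators sit between the promoter and the output gene;
            # each integrase input disarms one of them.
            blocking = [not integrase_a, not integrase_b]
            return not any(blocking)  # polymerase gets through, gene expressed

        for a in (False, True):
            for b in (False, True):
                print(int(a), int(b), "->", int(transcriptor_and(a, b)))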

    One of the main problems is that they are very slow, which may always be true of bio-computers. Then again, if you could make them massively parallel they might still have a place, along with the option of working in places where silicon chips struggle, like inside the human body.

    CRISPR could also be used to store information in DNA.

    "Shipman encoded a GIF of Muybridge’s horse into DNA, and then inserted those strands into living microbes using CRISPR. This technique is best known as a tool for editing genes by cutting strands of DNA at precise locations. But it has another trait that’s often overlooked: It’s an amazing tool for recording information. Shipman effectively turned bacteria into living hard drives."
    https://www.theatlantic.com/science...o-store-images-and-movies-in-bacteria/533400/

    There is still a long way to go before we get anything close to a traditional computer made purely from biological processes, but there do seem to be breakthroughs every few years, and one or two more could take us much closer to getting them working.
     
  4. amusicsite

    Next up in how computers are changing is the A.I. revolution.

    https://arstechnica.com/gadgets/2018/07/the-ai-revolution-has-spawned-a-new-chips-arms-race/

    This is, at the moment, not your general advancement in CPUs and GPUs to make things better, but instead heavily optimised systems that do a single complex task really well. A bit like the maths co-processor of the early days. The maths co-processor ended up embedded in the CPU die, along with many other chips that used to be separate components on the motherboard.

    For the A.I. systems at the moment, there seem to be a few different tasks that people are working on.

    1) The server cruncher.

    This is your beast of a machine with super-fast connections, like Google's Tensor Processing Unit. Its main aim is to crunch a load of data, whether that's analysing trillions of photos, street-map data or playing Go a billion times. This is your workhorse; throughput per watt and the amount of cooling needed are key here, as well as how many flops you can get out of a shipping-container-sized unit.
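
    Throughput per watt is just ops per second divided by power draw, but it's the number that decides what fits in that shipping container. A quick illustration (the figures below are made up purely to show the metric, not real chip specs):

        # Back-of-envelope perf-per-watt comparison. The numbers are
        # hypothetical, purely to show how the metric is computed.
        def tops_per_watt(tera_ops_per_sec: float, watts: float) -> float:
            return tera_ops_per_sec / watts

        chips = {
            "general-purpose GPU": tops_per_watt(120, 300),  # hypothetical
            "dedicated AI ASIC":   tops_per_watt(90, 75),    # hypothetical
        }
        for name, eff in chips.items():
            print(f"{name}: {eff:.2f} TOPS/W")

    At container scale the power and cooling budget is fixed, so a chip with triple the efficiency simply crunches three times as much data.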

    2) The small but dedicated.

    This is the lightweight chip that does one thing only with very low power use. You know, for phones and things like that. Arm have a few new processors in this market: the ML Processor for machine learning, the OD Processor for object detection, and there are others around too. These can be added to almost any camera so that it can detect objects from a predefined set of rules, or used to offload demanding tasks from the main CPU/GPU.

    If you think about it, small dedicated areas that perform certain tasks sound a lot like how our brains work. I've often thought that robots would really take off once you had dedicated chips to do certain tasks. It removes bottlenecks and seems to be how nature does it.

    Obviously, if you have power to spare you can scale these up for things like cars, as with the NVIDIA Jetson Xavier system, which packs everything you need for a self-driving car.

    3) The playground.

    The final type of computation needed is the virtual environment. Yes, you can get your automated cars to drive a million miles to see how they perform, but you can do the same in a fraction of the time in a virtual environment. Here speed, accuracy and multiple instances are the important things. If you can run a million virtual machines, you can get your virtual cars driving a million virtual miles a minute. Similar to the cruncher, but less about data throughput and more about running the same test in many different ways or under different virtual conditions, so that when you let it loose in the real world it's as close to perfect as possible.
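
    The reason the playground scales so well is that every virtual run is independent, so you can fan them out across as many cores or machines as you have. A cut-down Python sketch, where drive_scenario stands in for a real vehicle simulator:

        # Toy "playground": run many independent virtual test drives in
        # parallel. drive_scenario is a stand-in for a real simulator.
        import random
        from multiprocessing import Pool

        def drive_scenario(seed: int) -> float:
            """One virtual drive; returns miles covered before a failure."""
            rng = random.Random(seed)
            miles = 0.0
            while rng.random() > 0.001:  # 0.1% failure chance per step
                miles += 0.1
            return miles

        if __name__ == "__main__":
            with Pool() as pool:
                results = pool.map(drive_scenario, range(10_000))
            # ~100 miles per run on average, so ~1,000,000 virtual miles.
            print(f"{sum(results):,.0f} virtual miles in one batch")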


    This is not the usual story of software racing to catch up with CPU/GPU upgrades, but instead hardware being optimised to run algorithms that are 20 or more years old.
     
  6. amusicsite

    https://phys.org/news/2019-06-chip-energy.html

    Yet another breakthrough that could help make optical computers more of a reality, at least in the server space. The researchers predict they can reduce the power needed and make smaller chips that do the heavy lifting of neural networks. We will see if it works outside the simulator, but it does seem like we are getting closer to someone working out a viable computer that uses light rather than electrons.
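
    The reason neural networks are the first target is that they are mostly matrix multiplication, and a mesh of interferometers can apply a matrix to light in a single pass. The usual trick is to split the weight matrix by SVD into two unitary meshes plus a row of attenuators. A numpy sketch of the maths (not the optics):

        # The linear algebra behind photonic neural nets: any weight matrix W
        # factors as U @ diag(s) @ Vh (SVD). The two unitaries map onto meshes
        # of interferometers and diag(s) onto per-channel attenuators, so
        # W @ x happens as the light crosses the chip.
        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(4, 4))   # one layer's weight matrix
        x = rng.normal(size=4)        # input light amplitudes

        U, s, Vh = np.linalg.svd(W)

        y = Vh @ x                    # first interferometer mesh (unitary)
        y = s * y                     # attenuation/gain stage
        y = U @ y                     # second interferometer mesh (unitary)

        assert np.allclose(y, W @ x)  # matches the electronic matmul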