CS150

Description

Flashcards on CS150, created by alanischelsea on 24/05/2015.

Resource summary

Question Answer
conception about bigger space before? it used to be more advisable to have a bigger space because that meant you could put more inside; but that concept has changed.
What is parallel programming? small, weak computers can be configured with processors/special software that improves their performance; they are grouped together as clusters/grids; a massively parallel system
types of multiprocessors? mainframe and supercomputer
mainframe? performs many simple tasks concurrently
supercomputer? aimed at solving a few computationally intensive tasks
what makes supercomputers and mainframes similar? both have many processors; they can do what each other can do, provided they are given the same software; their performance depends on the software installed on them.
Changes in technology Performance - must double every 1.5 years; Cost - depreciation; people are smarter now in manufacturing ("learning curve"); Function - e.g., a phone that can also be a camera, video recorder, etc.
perception of the size of a device - size dictates design, not performance - a small/large device can be weak/powerful - performance is not based on size anymore
graphical representation of the relationship between learning and experience learning curve
Moore's Law performance doubles every 1.5 years
Why does Moore's Law's meaning never change? before, the number of transistors was directly proportional to performance.
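A minimal sketch in C of what "doubles every 1.5 years" implies numerically; the starting relative performance of 1.0 is an assumed value for illustration.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double base = 1.0;  /* assumed relative performance in year 0 */
    for (int year = 0; year <= 6; year += 3) {
        /* doubling every 1.5 years => growth factor 2^(years / 1.5) */
        double perf = base * pow(2.0, year / 1.5);
        printf("after %d years: %.1fx\n", year, perf);
    }
    return 0;
}
```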
What is a transistor? a microscopic switch with on/off states; it allows or blocks electricity from passing through; more transistors mean more ability to answer questions
What does a computer system do? it breaks down complex questions into simple ones that can be answered by yes/no (the transistor's task); the system is not smart but fast, because you have to explicitly state its course of decisions. A person, by contrast, can come to a conclusion without explicitly stating the steps taken to arrive at it.
Why does tech improve exponentially? because of other technology; the improvement of one technology is brought about by an existing one; e.g., the iPhone from the iPod + phone; not really new in terms of products, but new in the sense of procedures
transistor density? die size? disk? - how many transistors are put together - the square of silicon that becomes the CPU - the optimal size that we can handle
task of computer architecture & factors to consider when recommending - be able to recommend a computer according to needs - Application: what will the computer be used for? - History: what worked/didn't work in the company - Programming language: C, C#, Java? - Technology: web-based or ... - budget! - maximize performance
yield? percentage of good products over total products; an increase in yield means progress along the learning curve (LC), which affects the cost
formula for cost of die cost of die = wafer cost / (dies per wafer * die yield)
IC cost formula IC cost = (die cost + packaging cost + testing cost) / (final test yield)
cost of finding out which dies on a wafer are good/bad testing cost
wafers with 2000 dies, 50% die yield, 25% IC yield; answer? = 2000 * 0.5 * 0.25 = 250 ICs
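A minimal sketch in C of the cost formulas above; the die count and yields follow the worked example, while the wafer cost, packaging cost, and testing cost are assumed values for illustration only.

```c
#include <stdio.h>

int main(void) {
    /* Assumed example values (only the die count and yields come from the card above). */
    double wafer_cost       = 5000.0;  /* cost of one wafer                 */
    double dies_per_wafer   = 2000.0;  /* dies cut from one wafer           */
    double die_yield        = 0.50;    /* fraction of good dies             */
    double final_test_yield = 0.25;    /* fraction of ICs passing final test */
    double packaging_cost   = 2.0;
    double testing_cost     = 1.0;

    /* cost of die = wafer cost / (dies per wafer * die yield) */
    double die_cost = wafer_cost / (dies_per_wafer * die_yield);

    /* IC cost = (die cost + packaging cost + testing cost) / final test yield */
    double ic_cost = (die_cost + packaging_cost + testing_cost) / final_test_yield;

    /* good ICs per wafer = dies per wafer * die yield * final test yield */
    double good_ics = dies_per_wafer * die_yield * final_test_yield;

    printf("die cost = %.2f\n", die_cost);
    printf("IC cost  = %.2f\n", ic_cost);
    printf("good ICs = %.0f\n", good_ics);   /* prints 250 */
    return 0;
}
```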
the die in the beginning... has no circuit; the good ones are then transformed into ICs
direct cost? gross margin? average discount? - the coming together of components; labor - a.k.a. indirect cost; R&D, advertising cost - profit; this is the range that retailers take their discount from
time it takes to finish one task latency/execution time/response time
how many tasks can be performed in a given time throughput/bandwidth; measured as a rate
execution time & performance? inversely proportional - the lower the execution time, the better the performance
Amdahl's Law improving a computer means improving the part/component that you use the most, in order to feel the change
Fraction enhanced and Speedup enhanced - how often is the component used? should be less than or equal to 1 (a percentage) - how much faster is the improvement? greater than 1; if equal to 1, there is no improvement at all
formula of Amdahl's law speedup = ExTime_old / ExTime_new = 1 / ((1 - FE) + (FE / SE)) - no unit; read as "___ times better than the old performance"
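A minimal sketch in C of Amdahl's Law as given above; the fraction enhanced (0.40) and speedup enhanced (10x) are assumed example values.

```c
#include <stdio.h>

/* Amdahl's Law: overall speedup from enhancing one component.
 * fe = fraction enhanced (<= 1), se = speedup of the enhanced part (> 1). */
static double amdahl_speedup(double fe, double se) {
    return 1.0 / ((1.0 - fe) + (fe / se));
}

int main(void) {
    /* Assumed example: a component used 40% of the time is made 10x faster. */
    printf("overall speedup = %.2fx\n", amdahl_speedup(0.40, 10.0));  /* ~1.56x */
    return 0;
}
```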
CPU clock - has a particular cycle that dictates computer operations - computer operations are carried out in accordance with the rate of the cycle
What is a cycle? the interval between the start of one burst and the start of the next burst; electricity is present during the burst; measured in Hertz (cycles per second); 3 cycles per second = 3 Hertz
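A small sketch in C relating the clock rate in Hertz (cycles per second) to the length of one cycle; the example rates are assumed values for illustration.

```c
#include <stdio.h>

int main(void) {
    /* Assumed example clock rates, in Hz. */
    double rates[] = { 3.0, 1e6, 3e9 };       /* 3 Hz, 1 MHz, 3 GHz */
    for (int i = 0; i < 3; i++) {
        double period = 1.0 / rates[i];       /* one cycle lasts 1/frequency seconds */
        printf("%.0e Hz -> %.3e s per cycle\n", rates[i], period);
    }
    return 0;
}
```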
CPU and floating-point numbers - floating-point number: a number with a ridiculous number of decimal places - a CPU that can compute floating-point operations is more ideal/more precise - such performance is significant when computing pixels for gaming
Locality of reference - CPU: the brain - main memory or RAM: primary memory; what we usually upgrade is the RAM - secondary memory: large memory space that is relatively slow; hard drive; storage
interaction among CPU, RAM and HD Analogy - the secondary memory is like a cabinet with all the files; the RAM is the study table that holds the addresses; the CPU gets the data from memory and then works on it at the respective RAM address - highest priority is given to the data you are using now
temporal locality? spatial locality? - across time, there is data with a high probability of being referenced; for example, the waiter already knows what you will order because it is what you always order - more about space; data located next to each other has a high probability of being referenced
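A minimal sketch in C of the two kinds of locality: the running sum is reused on every iteration (temporal locality), and the array elements are accessed at adjacent memory locations (spatial locality).

```c
#include <stdio.h>

#define N 1000

int main(void) {
    int a[N];
    int sum = 0;                  /* 'sum' is reused every iteration: temporal locality */

    for (int i = 0; i < N; i++)
        a[i] = i;

    for (int i = 0; i < N; i++)   /* a[0], a[1], a[2], ... are adjacent in memory: spatial locality */
        sum += a[i];

    printf("sum = %d\n", sum);
    return 0;
}
```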
- what is a benchmark? - types of benchmarks? - a benchmark is the act of running a computer program, a set of programs, or other operations in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it. --- types --- - synthetic: a fake program; can be cheated by artificially changing results to match the benchmark - toy: literally a toy program; not used much in practice - kernel: "isolated performance"; comes from a real program; a real program broken into pieces
Computer hardware - what are the parts of the computer? - processor: CU + datapath - Memory: Main memory / secondary memory - Input/Output
what is a control unit? what is a datapath? the CPU has a control unit inside; the control unit issues signals to the rest of the CPU - the datapath is a collection of functional units, such as arithmetic logic units or multipliers, that perform data processing operations, plus registers and buses - all other parts will not function unless given a signal by the CU
Von Neumann Machine vs Harvard Archi VN - applied the concept of Alan Turing; programs are executed sequentially; parameters can be changed; suffers from a bottleneck (one shared pathway for data and instructions) HA - made four pathways for data and instructions and their respective addresses; solved the bottleneck; data and instructions can be accessed simultaneously http://v-codes.blogspot.com/2010/09/difference-between-harward-and-von.html
What is the memory hierarchy - the register is the fastest, followed by cache, main memory, and secondary memory - access time is the time it takes to locate the data and return it - a smaller size means better performance because there is less space to search through - tapes are the slowest because access is sequential; a longer process
- boss inside the CPU - fed with instructions - without its signals, the other parts of the CPU will not function CONTROL UNIT
parts of this have specific roles that are non-flexible; one primary example is the register; general purpose - can hold general data; special purpose - limited to the data specified for it (adobo analogy) DATAPATH
higher vs lower programming language HIGH-LEVEL PL - closer representation to the user; the compiler tells the RAM to make space for 'x'; various addresses contain x's; the compiler binds the value of x to the RAM address - garbage collection in higher-level PLs - higher-level PL = more generic - advantage: ease of implementation LOW-LEVEL PL - no garbage collection; you must explicitly state the need for one - closer to the machine; e.g., the C programming language - greater hardware detail = lower level - therefore, computer architecture is more about the low level - advantage: control of hardware addresses - more connected to the hardware, so you can specify the address
it performs the mapping - sets the representation - allows the operation to happen instruction set architecture
instruction cycle Fetch instruction, Decode instruction, Operand fetch, Execution, Store (store in the left operand)
memory address register memory data register instruction register - contains the address from which the data is fetched - temporarily stores the data after it was fetched - holds the instruction currently being executed
signal issued by CU to fetch the instruction MEM read
difference between RISC and CISC Complex instruction set computers - newer processors are more powerful than older models while maintaining compatibility - use of microcode - richer/more complex instruction sets simplify compiler design - more powerful instructions mean fewer instructions are needed - the more complex the instruction set, the bigger the required control store Reduced ISC - performs simpler ops and uses simple addressing modes - complex ops can be done by using several simpler ops - 80/20 rule: if the most-used instructions are sped up, then overall performance would be greater - basic hardware; easier to compile for
set of machine level instructions for a particular type of computer or hardware instruction set
the sequence of ops involved in processing an instruction instruction cycle
instruction cycle..... fetch instruction decode instruction operand fetch execution store
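A minimal sketch in C of the fetch-decode-execute cycle above, using a made-up one-register machine; the opcodes and memory layout are assumptions for illustration only, not any real instruction set.

```c
#include <stdio.h>

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    int memory[16] = {
        /* program: load mem[10] into R0, add mem[11], store into mem[12], halt */
        OP_LOAD, 10, OP_ADD, 11, OP_STORE, 12, OP_HALT, 0, 0, 0,
        /* data */ 7, 5, 0, 0, 0, 0
    };
    int pc = 0;       /* program counter           */
    int r0 = 0;       /* a single general register */
    int running = 1;

    while (running) {
        int ir = memory[pc++];          /* fetch: instruction register gets the opcode */
        switch (ir) {                   /* decode */
        case OP_LOAD:  r0 = memory[memory[pc++]];  break;  /* operand fetch + execute */
        case OP_ADD:   r0 += memory[memory[pc++]]; break;
        case OP_STORE: memory[memory[pc++]] = r0;  break;  /* store the result        */
        case OP_HALT:  running = 0;                break;
        }
    }
    printf("mem[12] = %d\n", memory[12]);  /* prints 12 */
    return 0;
}
```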
this happens when the memory informs the CPU that it has completed a read/write request; the memory sends an MFC (memory function complete) signal - a situation in which the memory and CPU are synchronized cpu-memory handshake
types of control - hardwired - microprogrammed
considerations for hardwired control - fast: everything is done through hardware - permanent: difficult to modify - difficult to implement for a large instruction set
considerations for microprogrammed control - flexible because you can just modify the control store if needed - slower and may require more hardware due to the increasing number of instructions
areas of development for CPU design system bus speed, internal and external clock frequency, casing, cooling system, instruction set, wire material
what's the end result of these areas of development in CPU design? to enhance the speed of the CPU and the system in general
conduit for moving data between the processor and other system components system bus (main memory --> MAR, MM --> MDR, MM --> control unit)
speed of data processing inside the cpu internal clock freq
speed of data transfer to and from the cpu via the system bus external clock freq
faster? internal or external? internal! external clock * multiplier = internal clock
how long will moore's law last? 40 years!!!
what's the purpose of micron tech? what is it in the first place? - a micron is one millionth of a meter - the objective is to have thinner wires - allowing the CPU to operate at a lower voltage - generating less heat and operating at higher speeds
material used for wire copper - from MHz to GHz real quick
what about this system on a chip? - many components bundled in one chip; this results in replacing a dozen or so separate chips
impact of system on a chip - longer battery life - less heat dissipation - smaller devices
FC-LGA - for the casing of the IC - lower voltage used and less heat dissipation - the pins are found on the socket and not on the chip itself
going beyond the recommended clock frequency; done by raising - the system bus frequency - the CPU frequency multiplier - or both overclocking
exceeds the normal speed depending on processing requirements dynamic overclocking
cooling system makes sure that the CPU won't overheat, as it tends to get hotter as it gets faster
multimedia processing - GPU - a CPU that can compute floating-point operations faster is more ideal - if pixel calculation is fast, it is better for gaming
how to handle multimedia - speed up the CPU - use high-end 3D graphics cards - pipeline implementation, multithreading (multiple processes at the same time) - multimedia-specific instructions
single-cpu innovation - internal: how many bits can the cpu process simultaneously? - external: how many bits can the cpu receive simultaneously for processing?
superscalar A superscalar processor executes multiple independent instructions in parallel. Common instructions (arithmetic, load/store, etc.) can be initiated simultaneously and executed independently.
current innovations today multicore, multithreading
intel CPU lineup Pentium, Celeron, Atom, Core i3/i5/i7, Xeon, Itanium
AMD CPU lineup desktop - AMD FX, Phenom, Athlon, Sempron; tablet - A-Series; server - Opteron
combining several components in a single chip accelerated processing unit (CPU, GPU, memory controller, video encoder/decoder)
intel CPU innovations - Hyper-Threading - multicore - QuickPath Interconnect
AMD CPU innovations (Advanced Micro Devices) - HyperTransport - multicore - Direct Connect Architecture
POWER7+ 8 cores, 4.42 GHz, 4 threads per core, 32 nm process; introduced TrueNorth, a prototype chip that mimics 1 million human neurons and 256 million synapses IBM
SPARC M6 Processor 12 cores, 8 threads per core, 3.6 GHz, 48 MB L3 cache, 28 nm process ORACLE