
The German word Viadukt (masculine or neuter; in Switzerland and Austria neuter) comes from Latin (via = way + ducere = to lead; perfect passive participle ductum) and literally means "way-leading" or, very freely translated, "route". The term viaduct denotes more or less tall and long road or railway bridges that span a valley or depression at a low gradient, on piers and often arches.

  

Viaducts were already numerous in antiquity, especially among the ancient Romans, but only with the rise of the railways around 1830 did the construction and use of such structures resume on a large scale. Besides the important aqueducts running at a constant height, there are also the vaulted viaducts on the Praenestine military road between Rome and Gabii, with semicircular vaults and piers of tuff ashlar, and those of the Appian Way near Aricia. The Pont Serme in southern France reached a remarkable length of 1,500 metres.

There is no universally accepted definition of the term viaduct. Every viaduct is also a bridge, and from a structural point of view viaducts are classified in the same categories as bridges (arch bridges, beam bridges, etc.). The term viaduct has more to do with the structure's effect on its surroundings and with its function of carrying major traffic routes with as few detours and gradients as possible. A viaduct does not merely cross; it also connects. Whether a bridge is called a viaduct therefore usually depends on local circumstances. As a rule, multi-span bridges that mainly cross water are called bridges rather than viaducts. A viaduct thus mainly crosses land and could, at least in part, theoretically be replaced by an embankment.[2]

  

A viaduct is usually not dominated by a single main arch but consists of several, mostly uniform, arches or openings. Even where there is a main opening, it accounts for only a small part of the viaduct's total length. The term viaduct is also frequently used for a bridge structure composed of several bridges built directly one after another; the Lorraineviadukt, for example, consists of four consecutive bridges.

  

According to the Duden, the term viaduct is also a synonym for valley bridge and overpass.

Viaducts are built of stone, brick, concrete, iron, or wood. In a narrower sense, viaduct also denotes the smaller overpasses and underpasses of roads or railways with one to three openings, which are either vaulted or spanned by iron girders resting on stone piers, whether solid rolled sections or girders assembled from plate and section iron. Stone viaducts usually have semicircular vaults, slender piers and, with increasing height, two, three, or four levels formed by intermediate vaults. The intermediate piers are either of equal strength or weaker; grouped piers occur where several intermediate piers alternate with stronger ones.

  

The Millau Viaduct was opened on 14 December 2004 by President Jacques Chirac and is one of the most imposing bridges in the world: carried by seven piers, the motorway bridge crosses the valley of the Tarn five kilometres west of Millau, with a length of 2,460 metres and a maximum height of 270 metres.

Stone viaducts

The Ravenna Bridge in the Höllental (Black Forest) is 58 m high and 225 m long; each of its eight arches spans 20 metres. The railway viaduct was built in 1927/28.

The Ruhr viaduct near Herdecke is about 30 m high.

The Ruhr viaduct near Witten is a good 800 m long.

The Altenbeken Viaduct was inaugurated as early as 1853.

The Burtscheid Viaduct, built from 1838 to 1840, is one of the oldest railway bridges in Germany still in use.

The Desenzano viaduct near Verona is single-tiered and 60 m high.

The Puente Nuevo viaduct in Ronda, Spain, is 120 m high.

The Lockwood Viaduct in England is notable for its piers, which have a slenderness ratio of 1/30.

The viaduct over the Elster valley in Saxony is two-tiered and 69.75 m high.

The viaduct over the Göhl valley near Aachen, destroyed in 1940, was two-tiered.

The Chaumont Viaduct is three-tiered and 50 m high.

The viaduct over the Göltzsch valley near Reichenbach im Vogtland in Saxony is partly four-tiered; at 80.37 m it was the highest railway bridge in the world when built and is still considered the largest brick bridge.

Some viaducts of the Semmering Railway are additionally curved in plan.

The Stadtbahn arches along Vienna's Gürtel were built as a separate traffic level for public transport. Today a lively bar and restaurant scene has developed in the arches.

The Himbächel Viaduct of the Odenwaldbahn.

The Landwasser Viaduct of the Rhaetian Railway.

The viaduct of Bolesławiec (Bunzlau) in Poland over the Bober is 450 m long and was built from 1844 to 1846.

The masonry arch of the Salcano Bridge on the Wocheiner Railway, with a span of 85 m, is the largest arch ever built for a viaduct.

A cycle path now runs over the two viaducts near Plein (Eifel).

The hillside viaduct near Pünderich on the Moselle.

The viaduct in Apolda is 95 m long and 23 m high; it was completed on 2 December 1846, and the inauguration ceremony took place on 16 December 1846.

The Bietigheim railway viaduct (landmark of the town of Bietigheim), built from 1851 to 1853 by Karl Etzel, is about 30 m high with a span of 287 m and has 21 arches. The viaduct forms part of the line between Bietigheim-Bissingen and Bruchsal.

The second Lorzentobel Bridge in the canton of Zug (Switzerland) was built in 1910 as an arched viaduct. It is 187 m long and up to 58 m high.

The Stadtbahn line in Berlin is a stone viaduct more than 8 km long, built between 1875 and 1882. It is the longest listed monument in Germany.[3]

The Pulvermühle viaduct in Luxembourg was inaugurated in 1862.

The Castielertobel Viaduct of the Arosa Railway, built in 1914 (in use until 1942).

  

Iron viaducts usually have stone piers, like the viaduct near Znaim, or iron piers on stone bases, like the Crumlin Viaduct near Newport in South Wales, the Saane viaduct near Fribourg, the Sitter viaduct near St. Gallen, the viaducts of the Orléans Railway near Baufseau d'Ahun and over the Cère, the viaduct over the Gravina near Castellaneta, and the Pfrimm valley viaduct near Marnheim in the Palatinate.

  

On the Erfurt–Ilmenau line, the single-track valley crossing near Angelroda was built as a cast-iron viaduct, as was the viaduct over the Nidda valley on the Friedberg–Hanau line.

  

Other iron viaducts:

  

Castielertobel Viaduct between Calfreisen and Castiel

Firth of Tay Bridge in Scotland

The "Kentucky High Bridge" of the Cincinnati Southern, today Norfolk Southern.

Portage Viaduct of the Erie Railroad, built in 86 days over the Genesee River on the site of a burnt-down wooden viaduct.

Müngsten Bridge between Remscheid and Solingen

Viaduct over the valley of the Aqua de Varrugas near Lima in Peru, with piers 76.8 m high.

The truss viaducts Kübelbach, Ettenbach, and Stockerbach on the Gäubahn Eutingen–Freudenstadt; and the Sitter viaduct of the Swiss Südostbahn in the canton of St. Gallen, with its striking iron semi-parabolic truss girder (fish-belly girder), which at 99 m is considered the highest railway bridge in Switzerland.

Viaduc de Millau over the valley of the Tarn (steel deck)

The Castielertobel Viaduct in the Schanfigg (a stone arch bridge until 1942)

  

Wooden viaducts were of minor importance and were usually only a stopgap, since sparks from steam locomotives easily set them on fire and they burned down. They were nevertheless built because they were cheap to erect. Historical examples are the burnt-down viaducts over the Genesee River near Portage in the United States, with wooden piers 57.4 m high, and the viaducts over the Msta in Russia, with wooden piers 21.34 m high, both on masonry bases.

  

Viaducts of reinforced and prestressed concrete

The Lehnen viaduct at Beckenried in Switzerland.

The Neckar valley viaduct near Reutlingen (Baden-Württemberg)

The Schengen viaduct carries the A 8 across the Moselle between Perl and Schengen.

The Moselle viaduct near Vandières carries the French high-speed line LGV Est européenne (Paris–Strasbourg) across the Moselle.

The Millau Viaduct (Viaduc de Millau) over the French Tarn gorge is the highest motorway bridge in the world.

The Langwies Viaduct and the Gründjitobel Viaduct near Langwies were the largest reinforced-concrete railway bridges in the world when they opened in 1914.

The Schildesche Viaduct in Bielefeld.

Circular viaducts

A special form of viaduct is the circular or spiral viaduct. Like a spiral tunnel, it overcomes a difference in elevation, but the height is gained in the open (on the viaduct) rather than inside a mountain. The most famous spiral viaduct is on the Bernina Railway at Brusio.
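The geometry of such a spiral can be sketched with a short calculation; the radius and gradient below are illustrative assumptions, not the actual figures of the Brusio viaduct.

```python
import math

def height_gain_per_loop(radius_m, gradient):
    """Height gained over one full circle of a spiral viaduct
    laid out at a constant gradient (rise per unit of track length)."""
    circumference = 2 * math.pi * radius_m
    return circumference * gradient

# Illustrative, assumed values: a 50 m radius loop at a 7 % gradient
# gains roughly 22 m of elevation per full circle.
print(round(height_gain_per_loop(50, 0.07), 1))  # 22.0
```

The same relation explains why spiral viaducts use generous radii: a tighter loop at the same gradient gains proportionally less height per circle.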

A hillside viaduct primarily creates a (possibly inclined) plane on a mountainside on which a traffic route can be built; any cuts in the slope are bridged more or less incidentally. A well-known hillside viaduct in Germany stands near Pünderich on the Moselle and carries the Moselle railway line.

  

A viaduct is a bridge composed of several small spans[1] for crossing a valley or a gorge. The term viaduct is derived from the Latin via, for road, and ducere, to lead. However, the ancient Romans did not use the term; it is a modern derivation from an analogy with aqueduct.[4] Like the Roman aqueducts, many early viaducts comprised a series of arches of roughly equal length. Viaducts may span land or water or both.

  

The longest viaduct in antiquity may have been the Pont Serme which crossed wide marshes in southern France.[6] In Romance languages, the word viaduct refers to a bridge which spans only land. A bridge spanning water is called ponte.

  

Over land

Viaducts are commonly used in many cities that are railroad centers, such as Chicago, Atlanta, Birmingham, London, and Manchester. These viaducts cross the large railroad yards that are needed for freight trains there, and also cross the multi-track railroad lines that are needed for heavy railroad traffic. These viaducts keep highway and city street traffic from having to be continually interrupted by the train traffic. Likewise, some viaducts carry railroads over large valleys, or they carry railroads over cities with many cross-streets and avenues.

  

Many viaducts over land connect points of similar height in a landscape, usually by bridging a river valley or other eroded opening in an otherwise flat area. Often such valleys had roads descending either side (with a small bridge over the river, where necessary) that became inadequate for the traffic load, necessitating a viaduct for "through" traffic.[7] Such bridges also lend themselves to use by rail traffic, which requires straighter and flatter routes.[8] Some viaducts have more than one deck, such that one deck carries vehicular traffic and another carries rail traffic. One example is the Prince Edward Viaduct in Toronto, Canada, which carries motor traffic on the top deck as Bloor Street and metro trains of the Bloor-Danforth subway line on the lower deck, over the steep Don River valley. Others were built to span settled areas and crossed over the roads beneath, which is the reason for many viaducts in London.

  

Over water

Viaducts over water are often combined with other types of bridges or tunnels to cross navigable waters. The viaduct sections, while less expensive to design and build than tunnels or bridges with larger spans, typically lack sufficient horizontal and vertical clearance for large ships. See the Chesapeake Bay Bridge-Tunnel.

  

The Millau Viaduct is a cable-stayed road bridge that spans the valley of the River Tarn near Millau in southern France. Designed by the French bridge engineer Michel Virlogeux in collaboration with the architect Norman Robert Foster, it is the tallest vehicular bridge in the world, with one pier's summit at 343 metres (1,125 ft), slightly taller than the Eiffel Tower and only 38 m (125 ft) shorter than the Empire State Building. It was formally dedicated on 14 December 2004 and opened to traffic two days later. The Danyang–Kunshan Grand Bridge in China, a viaduct, is the longest bridge in the world according to Guinness World Records as of 2011.

  

Land use below viaducts

  

Where a viaduct is built across land rather than water, the space below the arches may be used for businesses such as car parking, vehicle repairs, light industry, bars and nightclubs. In the United Kingdom, many railway lines in urban areas have been constructed on viaducts, and so the infrastructure owner Network Rail has an extensive property portfolio in arches under viaducts.[10]

  

Past and future

Elevated expressways were built in wealthy cities such as Boston (Central Artery), Seoul, Tokyo, and Toronto (Gardiner Expressway).[11] Some were demolished because they were considered ugly and divided the city.[citation needed] However, in developing nations such as Thailand, India (Delhi-Gurgaon Expressway), China, Bangladesh, and Pakistan, elevated expressways have been built and more are under construction to improve traffic flow, particularly as a workaround for land shortage when built atop surface roads.[citation needed] In Indonesia, viaducts are used for railways in Java and also for highways such as the Jakarta Inner Ring Road.

  

Sources:

en.wikipedia.org/wiki/Viaduct

de.wikipedia.org/wiki/Viadukt

 

Photography (German Fotografie or Photographie, from Greek φῶς, phos, genitive φωτός, photos, "light (of the heavenly bodies)", "brightness", and γράφειν, graphein, "to draw", "to scratch", "to paint", "to write") denotes

an imaging method[1] in which, by means of optical processes, a light image is projected onto a light-sensitive medium and stored there directly and permanently (analogue process), or converted into electronic data and stored (digital process);

the permanent light image (slide, film frame, or paper print; in short, picture, colloquially also photo) produced by photographic processes; this can be either a positive or a negative on film, transparency, paper, or other photographic carriers. Photographic images are reproduced as prints, enlargements, film copies, or as exposures or printouts of digital image files. The corresponding profession is that of the photographer;

pictures taken for the cinema: any number of photographic images are recorded on film as series of single frames, which can later be shown with a film projector as moving images (see film).

  

The term Photographie was first used on 25 February 1839 (before any English or French publication) by the astronomer Johann Heinrich von Mädler in the Vossische Zeitung.[2] Well into the 20th century, photography denoted all images produced purely by light on a chemically treated surface. The German spelling reform of 1901 recommended the spelling Fotografie, which has never fully prevailed. Mixed spellings such as Fotographie or Photografie, and adjectives or nouns derived from them, have always been incorrect.

  

General

Photography is a medium used in very different contexts. Photographic images can, for example, have a primarily artistic character (artistic photography) or a primarily commercial one (industrial photography, advertising and fashion photography). Photography can be considered from artistic, technical (photographic technology), economic (photographic industry), and socio-social (amateur, workers', and documentary photography) points of view. Photographs are also used in journalism and in medicine.

Photography is in part a subject of research and teaching in art history and in the still young discipline of image science. The potential art status of photography was long disputed, but since the photographic movement of Pictorialism around the turn of the 20th century it has ultimately no longer been contested. Some research traditions assign photography to media or communication studies; this classification is also disputed.

In the course of technological development, the early 21st century saw a gradual shift from classical analogue (silver) photography to digital photography. The worldwide collapse of the associated industry for analogue cameras and for consumables (films, photographic paper, photochemicals, laboratory equipment) has led to photography being increasingly studied from cultural-scientific and cultural-historical perspectives as well. General cultural aspects of this research include, for example, the preservation and documentation of practical knowledge of the photographic processes for exposure and development, and the change in the everyday use of photography. The techniques for archiving and preserving analogue images, as well as system-independent long-term digital data storage, are becoming increasingly interesting from a cultural-historical point of view.

Photography is subject to complex and multilayered photographic law; when using existing photographs, image rights must be observed.

  

Photographic technology

In principle, photographs are usually taken with the aid of an optical system, in many cases a lens. This projects the light emitted or reflected by an object onto the light-sensitive layer of a photographic plate or film, or onto a photoelectric converter, an image sensor.

→ Main article: Fototechnik

Photographic cameras

→ Main article: Kamera

Photographs are taken with a photographic apparatus (camera). By manipulating the optical system (among other things the aperture setting, focusing, colour filtering, the choice of exposure time, lens focal length, lighting, and not least the recording material), the photographer or camera operator has numerous creative options. The single-lens reflex camera has established itself as the most versatile camera design in both the analogue and the digital field. For many tasks, however, a wide variety of special cameras continue to be needed and used.

  

Light-sensitive layer

In film-based photography (e.g. silver photography), the light-sensitive layer on the image plane is a dispersion (in common parlance, an emulsion). It consists of a gel in which small grains of a silver halide (for example silver bromide) are evenly distributed. The smaller the grain, the less light-sensitive the layer is (see the ISO 5800 standard), but the better its resolution ("grain"). This light-sensitive layer is given stability by a support. Support materials are cellulose acetate (formerly cellulose nitrate, i.e. celluloid), plastic films, metal plates, glass plates, and even textiles (see photographic plate and film).

In digital photography, the equivalent of the light-sensitive layer is a chip such as a CCD or CMOS sensor.

  

Development and fixing

In film-based photography, developing makes the latent image visible by chemical means. Fixing renders the unexposed silver halide grains water-soluble so that they can then be washed out with water, allowing the picture to be viewed in daylight without darkening.

Another, older process is the dusting-on process, which can be used to produce images that can be fired onto glass and porcelain.

A digital image does not need to be developed; it is stored electronically and can then be processed on a computer with image-editing software and, if required, exposed onto photographic paper or printed, for example with an inkjet printer. The processing of raw data is also referred to as development.

  

The print

A print is the result of a contact copy, an enlargement, or an exposure; the result is usually a paper picture. Prints can be made from films (negative or slide) or from files.

Contact prints have the same size as the recording format; if an enlargement is made from the negative or positive, the resulting picture is several times the size of the original, but the aspect ratio is usually retained, which in classical photography is 1.5, i.e. 3:2, or 4:5 in the USA.

An exception is the cropped enlargement, whose aspect ratio can be set arbitrarily on the easel of an enlarger; however, a cropped enlargement is usually also exposed onto a paper format with specific dimensions.
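That an enlargement preserves the aspect ratio can be shown with a minimal sketch; the 36 × 24 mm frame and the 4× factor are example values, not prescribed by any standard.

```python
from fractions import Fraction

def enlargement_size(width_mm, height_mm, factor):
    """Scale a negative by a linear factor; the aspect ratio is unchanged."""
    return width_mm * factor, height_mm * factor

# A 35 mm negative measures 36 x 24 mm, i.e. the classic 3:2 ratio:
assert Fraction(36, 24) == Fraction(3, 2)

# Enlarged 4x linearly, it gives a 144 x 96 mm print, still 3:2:
w, h = enlargement_size(36, 24, 4)
print(w, h)  # 144 96
assert Fraction(w, h) == Fraction(3, 2)
```

A cropped enlargement breaks exactly this invariant: the crop chooses a new width-to-height ratio before scaling.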

  

The print is a common form of presentation in amateur photography and is collected in special boxes or albums. In slide projection, by contrast, one generally works with the original slide, i.e. a unique original, whereas prints are always copies.

  

History of photography

→ Main article: Geschichte und Entwicklung der Fotografie

Precursors and prehistory

The name camera derives from the precursor of photography, the camera obscura ("dark chamber"), which has been known since the 11th century and was used by astronomers for solar observation from the end of the 13th century. Instead of a lens, this camera has only a small hole through which light rays fall onto a projection surface, from which the upside-down, laterally reversed image can be traced. In Edinburgh and Greenwich near London, walk-in, room-sized camerae obscurae are a tourist attraction; the Deutsches Filmmuseum also has a camera obscura, in which an image of the opposite bank of the Main is projected.

A breakthrough came in 1550 with the reinvention of the lens, which made brighter and at the same time sharper images possible. In 1685 followed the deflecting mirror, with which an image could be traced onto paper.

In the 18th century came the magic lantern, the panorama, and the diorama. Chemists such as Humphry Davy were already beginning to investigate light-sensitive substances and to search for fixing agents.

  

The early processes

Probably the first photograph in the world was taken in the early autumn of 1826 by Joseph Nicéphore Nièpce using the heliography process. In 1837, Louis Jacques Mandé Daguerre used a better process, based on developing the image with mercury vapour and then fixing it in a hot solution of common salt or a normal-temperature sodium thiosulfate solution. The pictures produced in this way, all unique originals on silvered copper plates, were called daguerreotypes. As early as 1835, the Englishman William Fox Talbot had invented the negative-positive process. Even today, some of the historical processes are still used as noble printing processes in the fine arts and artistic photography.

In 1883, the important Leipzig weekly Illustrirte Zeitung published the first screened photograph to appear in a German publication, in the form of an autotype (halftone), a process invented by Georg Meisenbach around 1880.

  

20th century

At first, photographs could only be produced as unique originals; the introduction of the negative-positive process made reproduction by contact printing possible. In both cases, the size of the finished photograph corresponded to the recording format, which required very large, unwieldy cameras. With roll film and, in particular, the 35 mm camera developed by Oskar Barnack at the Leitz works and introduced in 1924, which used conventional 35 mm cine film, entirely new possibilities opened up for mobile, fast photography. Although the small format required additional equipment for enlargement, and the image quality could not remotely match that of the large formats, 35 mm established itself as the standard format in most areas of photography.

  

Analogue photography

→ Main article: Analogfotografie

The term

To distinguish it from the new photographic processes of digital photography, the term analogue photography, or alternatively the by then outdated spelling Photographie, reappeared at the beginning of the 21st century.[3]

To explain the then-new technology of digital image-file storage to the public from 1990 onwards, some publications compared it technically with the analogue image storage of the still-video cameras used until then. Through translation errors and misinterpretations, and because a general lack of technical understanding of digital camera technology still prevailed, some journalists subsequently and erroneously also referred to the classic film-based camera systems as analogue cameras.[4][5]

The term has survived to this day, but it now incorrectly denotes not photography using the analogue storage technology of the first digital still-video cameras, but only the technique of film-based photography, in which nothing is "stored" either digitally or analogously; the image is fixed chemically and physically.

  

General

A photograph can be neither analogue nor digital. Only the image information can be determined pointwise by means of physical, analogue measurable signals (densitometry, spectroscopy) and, if necessary, subsequently digitised.

After the film has been exposed, the image information is initially only latent. It is stored not in the analogue camera but only during the development of the film, by chemical reaction in a three-dimensional gelatine layer (film has several sensitised layers lying on top of one another). The image information is then directly present on the original recording medium (slide or negative) and is visible without further aids as a photograph (a unique original) in the form of developed silver halides or colour couplers. If required, a paper print can be produced from such photographs in a second chemical process in the photographic laboratory, or nowadays by scanning and printing.

With digital storage, the analogue signals from the camera sensor are digitised in a second stage and thus become electronically interpretable and processable. Digital image storage by means of an analogue-to-digital converter, after readout from the chip of the digital camera, works (in simplified terms) with a merely two-dimensional digital interpretation of the analogue image information and produces a file that can be copied any number of times (practically without loss), in the form of differentially determined digital absolute values. These files are stored on memory cards in the camera immediately after the picture is taken. Using suitable image-editing software, they can then be read out, processed further, and output as a visible photograph on a monitor or printer.
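The digitisation step described above can be sketched as a simple analogue-to-digital quantisation; the 8-bit depth and the 0–1 V signal range are assumptions for illustration, not the parameters of any real sensor.

```python
def quantize(voltage, v_max=1.0, bits=8):
    """Map an analogue sensor voltage (0..v_max) to a digital
    code (0..2**bits - 1), as an A/D converter would."""
    levels = 2 ** bits
    code = int(voltage / v_max * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the valid range

# Three pixel voltages from a hypothetical sensor row:
row = [0.0, 0.5, 1.0]
print([quantize(v) for v in row])  # [0, 127, 255]
```

The resulting integer codes, unlike the original voltages, can be copied any number of times without loss, which is the point made in the paragraph above.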

  

Digital photography

The first CCD (charge-coupled device) still-video camera was built by Bell in 1970, and in 1972 Texas Instruments filed the first patent on a filmless camera, which used a television screen as a viewfinder.

In 1973, Fairchild Imaging produced the first commercial CCD, with a resolution of 100 × 100 pixels.

This CCD was used in 1975 in the first working digital camera, built at Kodak by the inventor Steven Sasson. The camera weighed 3.6 kilograms, was larger than a toaster, and needed 23 seconds to transfer a black-and-white image with a resolution of 100 × 100 pixels to a digital magnetic tape cassette; a further 23 seconds were required to make the image visible on a screen.

In 1986, Canon presented the RC-701, the first commercially available still-video camera with magnetic recording of the image data, and Minolta presented the Still Video Back SB-90/SB-90S for the Minolta 9000; replacing the back of this 35 mm single-lens reflex camera turned the Minolta 9000 into a digital SLR, with the image data stored on 2-inch floppy disks.

In 1987 came further models of Canon's RC series and digital cameras from Fujifilm (ES-1), Konica (KC-400), and Sony (MVC-A7AF), followed by Nikon with the QV-1000C in 1988 and, in 1990 and 1991, Kodak with the DCS (Digital Camera System) and Rollei with the Digital Scan Pack. From the early 1990s, digital photography can be regarded as established in commercial image production.

Digital photography revolutionised the possibilities of digital art, but in particular it also facilitates photo manipulation.

Photokina 2006 showed that the era of the film-based camera was definitively over.[6] In 2007, 91 per cent of all cameras sold worldwide were digital,[7] and conventional film photography shrank to niche markets. In 2011, around 45.4 million people in Germany had a digital camera in their household, and in the same year around 8.57 million digital cameras were sold in Germany.[8]

  

See also: Chronologie der Fotografie and Geschichte und Entwicklung der Fotografie

Photography as art

  

The art status of photography was long disputed; the art theorist Karl Pawek put it pointedly in his book Das optische Zeitalter (Olten/Freiburg i. Br. 1963, p. 58): "The artist creates reality, the photographer sees it."

  

This view regards photography merely as a technical, standardised process by which reality is depicted in an objective, quasi "natural" way, without creative, and thus artistic, aspects coming into play: "the invention of an apparatus for the purpose of producing ... (perspectival) images has, ironically, strengthened the conviction ... that this is the natural form of representation. Apparently something is natural if we can build a machine that does it for us."[9] Nevertheless, photographs soon served as teaching aids and models in the training of visual artists (études d'après nature).

  

Texts of the 19th century, however, already pointed to the art status of photography, justified by a use of technique similar to that of other recognised contemporary graphic processes (aquatint, etching, lithography, ...). On this view, photography too is an artistic process with which a photographer creates image realities of his own.[10]

  

Numerous painters of the 19th century, such as Eugène Delacroix, also recognised this and used photographs as a means of finding and composing images, as an artistic design tool for painted works, though still without granting photography an artistic value of its own.

  

The photographer Henri Cartier-Bresson, himself trained as a painter, likewise wanted photography to be regarded not as an art form but as a craft: "Photography is a craft. Many want to turn it into an art, but we are simply craftsmen who must do their work well." At the same time, however, he claimed for himself the concept of the decisive moment, originally elaborated in the poetics of drama by Gotthold Ephraim Lessing, thereby referring directly to an artistic procedure for the production of artworks. Cartier-Bresson's argument thus served, on the one hand, poetological ennoblement and, on the other, a craftsman's immunization against criticism that might question the artistic quality of his works. Indeed, Cartier-Bresson's photographs were shown in museums and art exhibitions very early on, for example in the MoMA retrospective (1947) and the Louvre exhibition (1955).

  

Photography was practiced as art early on (Julia Margaret Cameron, Lewis Carroll and Oscar Gustave Rejlander in the 1860s). The decisive step toward the recognition of photography as an art form is owed to the efforts of Alfred Stieglitz (1864–1946), who prepared the breakthrough with his magazine Camera Work.

  

In Germany, photography first appeared before the public on a notable scale at the 1929 Werkbund exhibition in Stuttgart, with international artists such as Edward Weston, Imogen Cunningham and Man Ray; at the latest since the MoMA exhibitions of Edward Steichen (The Family of Man, 1955) and John Szarkowski (1960s), photography has been recognized as art by a broad public, while at the same time a trend toward applied art began.

  

In 1977, documenta 6 in Kassel was the first internationally significant exhibition to place the works of historical and contemporary photographers from the entire history of photography, in its famous photography section, in a comparative context with contemporary art, on the occasion of the "150 years of photography" celebrated that year.

  

Today photography is accepted as a fully fledged art form. Indicators include the growing number of museums, collections and research institutions for photography, the increase in professorships for photography, and not least the risen value of photographs at art auctions and among collectors. Numerous genres have developed, such as landscape, nude, industrial and theater photography, each of which has established its own sphere within photography. In addition, artistic photomontage has developed into an art object equal in standing to painting. Besides the growing number of photo exhibitions and their visitor numbers, the popularity of modern photography is also visible in the prices achieved at art auctions: five of the ten highest bids for modern photography have been achieved at auctions since 2010, and the currently most expensive photograph, "Rhein II" by Andreas Gursky, was sold at an art auction in New York in November 2011 for 4.3 million dollars.[11] More recent discussions within photography and art studies, however, point to an increasing arbitrariness in the categorization of photography: what once belonged exclusively to the applied fields of photography is increasingly being absorbed by art and its institutions.

  

Photography (see section below for etymology) is the art, science and practice of creating durable images by recording light or other electromagnetic radiation, either chemically by means of a light-sensitive material such as photographic film, or electronically by means of an image sensor.[1] Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. The result in an electronic image sensor is an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing.

  

The result in a photographic emulsion is an invisible latent image, which is later chemically developed into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.

  

Photography has many uses for business, science, manufacturing (e.g. photolithography), art, recreational purposes, and mass communication.

  

The word "photography" was created from the Greek roots φωτός (phōtos), genitive of φῶς (phōs), "light"[2] and γραφή (graphé) "representation by means of lines" or "drawing",[3] together meaning "drawing with light".[4]

  

Several people may have coined the same new term from these roots independently. Hercules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, photographie, in private notes which a Brazilian photography historian believes were written in 1834.[5] Johann von Maedler, a Berlin astronomer, is credited in a 1932 German history of photography with having used it in an article published on 25 February 1839 in the German newspaper Vossische Zeitung.[6] Both of these claims are now widely reported, but apparently neither has ever been independently confirmed beyond reasonable doubt. Credit has traditionally been given to Sir John Herschel both for coining the word and for introducing it to the public. His uses of it in private correspondence prior to 25 February 1839 and at his Royal Society lecture on the subject in London on 14 March 1839 have long been amply documented and accepted as settled fact.

  

History and evolution

Precursor technologies

Photography is the result of combining several technical discoveries. Long before the first photographs were made, the Chinese philosopher Mo Di and the Greek thinkers Aristotle and Euclid described a pinhole camera in the 5th and 4th centuries BCE.[8][9] In the 6th century CE, the Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments,[10] Ibn al-Haytham (Alhazen) (965–1040) studied the camera obscura and pinhole camera,[9][11] Albertus Magnus (1193–1280) discovered silver nitrate,[12] and Georg Fabricius (1516–71) discovered silver chloride.[13] Techniques described in the Book of Optics are capable of producing primitive photographs using medieval materials.[14][15][16]

  

Daniele Barbaro described a diaphragm in 1566.[17] Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694.[18] The fiction book Giphantie, published in 1760, by French author Tiphaigne de la Roche, described what can be interpreted as photography.[17]

  

The discovery of the camera obscura, which provides an image of a scene, dates back to ancient China. Leonardo da Vinci mentions natural camerae obscurae formed by dark caves on the edge of a sunlit valley: a hole in the cave wall acts as a pinhole camera and projects a laterally reversed, upside-down image onto a piece of paper. The birth of photography was therefore primarily concerned with developing a means to fix and retain the image produced by the camera obscura.

  

The first success of reproducing images without a camera occurred when Thomas Wedgwood, from the famous family of potters, obtained copies of paintings on leather using silver salts. Since he had no way of permanently fixing those reproductions (stabilizing the image by washing out the non-exposed silver salts), they would turn completely black in the light and thus had to be kept in a dark room for viewing.

  

Renaissance painters used the camera obscura which, in fact, gives the optical rendering in color that dominates Western art. Camera obscura literally means "dark chamber" in Latin: a box with a hole in it that lets light pass through and project an image onto a surface such as a piece of paper.

  

First camera photography (1820s)

Invented in the early decades of the 19th century, photography by means of the camera seemed able to capture more detail and information than traditional media, such as painting and sculpture.[19] Photography as a usable process goes back to the 1820s with the development of chemical photography. The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it.[7] Niépce was successful again in 1825. He made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens), in 1826 or 1827.[20]

  

Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. Working in partnership with Louis Daguerre, he developed a somewhat more sensitive process that produced visually superior results, but it still required a few hours of exposure in the camera. Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process, the essential elements of which were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: unlike the other pedestrian and horse-drawn traffic on the busy boulevard, which appears deserted, one man having his boots polished stood sufficiently still throughout the approximately ten-minute-long exposure to be visible. Eventually, France agreed to pay Daguerre a pension for his process in exchange for the right to present his invention to the world as the gift of France, which occurred on 19 August 1839.

Meanwhile, in Brazil, Hercules Florence had already created his own process in 1832, naming it Photographie, and an English inventor, William Fox Talbot, had created another method of making a reasonably light-fast silver process image but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his method and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, with exposures comparable to the daguerreotype. In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies, the basis of most chemical photography up to the present day. Daguerreotypes could only be replicated by rephotographing them with a camera.[21] Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence.[22][23]

  

John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839.

  

In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets to the collodion process; the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper.

  

Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1884, George Eastman developed an early type of film to replace photographic plates, leading to the technology used by film cameras today.

  

In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize for Physics in 1908.

  

Black-and-white

See also: Monochrome photography

All photography was originally monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost and its "classic" photographic look. The tones and contrast between light and dark areas define black-and-white photography.[24] Not all monochrome pictures are pure blacks and whites; some contain other hues, depending on the process. The cyanotype process produces an image composed of blue tones; the albumen process, first used more than 150 years ago, produces brown tones.

  

Many photographers continue to produce some monochrome images, often because of the established archival permanence of well processed silver halide based materials. Some full color digital images are processed using a variety of techniques to create black and whites, and some manufacturers produce digital cameras that exclusively shoot monochrome.

  

Color

Color photography was explored beginning in the mid-19th century. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.

  

The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by physicist James Clerk Maxwell in 1855. Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image.
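Maxwell's principle can be sketched in a few lines. The three small arrays below are invented stand-ins for the filtered black-and-white exposures; stacking them as channels is a digital analogy for what was originally done optically, by projecting the three records through the same filters:

```python
import numpy as np

# Hypothetical 2x2 grayscale exposures taken through red, green and blue
# filters, normalized to 0..1 (values invented for illustration).
red_exposure = np.array([[1.0, 0.2], [0.0, 0.5]])
green_exposure = np.array([[0.1, 0.9], [0.0, 0.5]])
blue_exposure = np.array([[0.0, 0.1], [1.0, 0.5]])

# Maxwell's three-color principle: the three filtered records become the
# three channels of the reconstructed color image.
color_image = np.dstack([red_exposure, green_exposure, blue_exposure])
print(color_image.shape)  # (2, 2, 3)
```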

  

Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s.

  

Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images.

  

The development of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability.

  

Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s.

  

Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multilayer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure.

  

Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multilayer emulsion and the same principles, most closely resembling Agfa's product.

  

Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963.

  

Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment.

  

Digital photography

Main article: Digital photography

See also: Digital camera and Digital versus film photography

In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born.

  

Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.[25] An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital imaging is readily manipulated. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications.

  

Photography gained the interest of many scientists and artists from its inception. Scientists have used photography to record and study movements, such as Eadweard Muybridge's study of human and animal locomotion in 1887. Artists are equally interested by these aspects but also try to explore avenues other than the photo-mechanical representation of reality, such as the pictorialist movement.

  

Military, police, and security forces use photography for surveillance, recognition and data storage. Photography is used by amateurs to preserve memories, to capture special moments, to tell stories, to send messages, and as a source of entertainment. High speed photography allows for visualizing events that are too fast for the human eye.

  

Technical aspects

Main article: Camera

The camera is the image-forming device, and photographic film or a silicon electronic image sensor is the sensing medium. The respective recording medium can be the film itself, or a digital electronic or magnetic memory.[26]

  

Photographers control the camera and lens to "expose" the light recording material (such as film) to the required amount of light to form a "latent image" (on film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper or film.

  

The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. The subject being photographed, however, must be illuminated. Cameras can range from small to very large, a whole room that is kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera).

  

As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens.

  

The movie camera is a type of photographic camera which takes a rapid sequence of photographs on strips of film. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures together to create the illusion of motion.[27]

  

Camera controls are interrelated. The total amount of light reaching the film plane (the 'exposure') changes with the duration of the exposure, the aperture of the lens, and the effective focal length of the lens (which, in variable focal length lenses, can force a change in aperture as the lens is zoomed). Changing any of these controls can alter the exposure. Many cameras may be set to adjust most or all of these controls automatically, which is useful for occasional photographers in many situations.

  

The duration of an exposure is referred to as shutter speed, often even in cameras that do not have a physical shutter, and is typically measured in fractions of a second. Exposures from one to several seconds are quite possible, usually for still-life subjects, and for night scenes exposure times can be several hours. However, a subject in motion requires a fast shutter speed to prevent the photograph from coming out blurry.[29]

  

The effective aperture is expressed by an f-number or f-stop (derived from focal ratio), which is proportional to the ratio of the focal length to the diameter of the aperture. A longer lens passes less light even though the diameter of the aperture is the same, due to the greater distance the light has to travel; a shorter focal length gives a brighter image with the same size of aperture.

  

The smaller the f/number, the larger the effective aperture. The present system of f/numbers to give the effective aperture of a lens was standardized by an international convention. There were earlier, different series of numbers in older cameras.

  

If the f-number is decreased by a factor of √2, the aperture diameter is increased by the same factor, and its area is increased by a factor of 2. The f-stops that might be found on a typical lens include 2.8, 4, 5.6, 8, 11, 16, 22, 32, where going up "one stop" (using lower f-stop numbers) doubles the amount of light reaching the film, and stopping down one stop halves the amount of light.
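The stop arithmetic above can be checked in a few lines of Python. The helper function below is a sketch (not a standard library routine) that reports light transmission relative to f/1; note that marked f-stops such as 5.6 and 11 are conventional roundings of exact powers of √2:

```python
import math

# Light gathered is proportional to aperture area, i.e. to 1 / N**2,
# where N is the f-number.
def relative_light(f_number):
    """Light passed relative to f/1 (proportional, not absolute)."""
    return 1.0 / f_number ** 2

# The full-stop series advances by a factor of sqrt(2) per stop;
# rounding to one decimal shows where the marked values 5.6, 11, 22 come from.
series = [round(math.sqrt(2) ** i, 1) for i in range(3, 9)]
print(series)  # [2.8, 4.0, 5.7, 8.0, 11.3, 16.0]

# Going down one exact stop (N multiplied by sqrt(2)) halves the light.
one_stop_ratio = relative_light(8.0) / relative_light(8.0 * math.sqrt(2))
print(round(one_stop_ratio, 6))  # 2.0
```

The small discrepancies (5.7 vs the marked 5.6, 11.3 vs 11) are exactly the rounding conventions of lens markings.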

  

Image capture can be achieved through various combinations of shutter speed, aperture, and film or sensor speed. Different (but related) settings of aperture and shutter speed enable photographs to be taken under various conditions of film or sensor speed, lighting and motion of subjects and/or camera, and desired depth of field. A slower speed film will exhibit less "grain", and a slower speed setting on an electronic sensor will exhibit less "noise", while higher film and sensor speeds allow for a faster shutter speed, which reduces motion blur or allows the use of a smaller aperture to increase the depth of field.

  

For example, a wider aperture is used in lower light and a narrower aperture where there is more light. If a subject is in motion, then a high shutter speed may be needed. A tripod can also be helpful in that it enables a slower shutter speed to be used.

  

For example, f/8 at 8 ms (1/125 of a second) and f/5.6 at 4 ms (1/250 of a second) yield the same amount of light. The chosen combination has an impact on the final result. The aperture and focal length of the lens determine the depth of field, which refers to the range of distances from the lens that will be in focus. A longer lens or a wider aperture will result in "shallow" depth of field (i.e. only a small plane of the image will be in sharp focus). This is often useful for isolating subjects from backgrounds as in individual portraits or macro photography.
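The equivalence of such aperture/shutter pairs can be expressed through the exposure value, EV = log2(N²/t). The function below is a sketch of that formula; the two EVs agree only approximately because marked f-numbers such as 5.6 are rounded from exact powers of √2:

```python
import math

# EV = log2(N^2 / t): aperture/shutter combinations with (nearly) equal EV
# admit the same total amount of light.
def exposure_value(f_number, shutter_seconds):
    return math.log2(f_number ** 2 / shutter_seconds)

ev_a = exposure_value(8.0, 1 / 125)   # f/8 at 1/125 s
ev_b = exposure_value(5.6, 1 / 250)   # f/5.6 at 1/250 s

# The values differ only by the rounding of 5.6 (exactly 4 * sqrt(2) = 5.657...).
print(round(ev_a, 2), round(ev_b, 2))
```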

  

Conversely, a shorter lens, or a smaller aperture, will result in more of the image being in focus. This is generally more desirable when photographing landscapes or groups of people. With very small apertures, such as pinholes, a wide range of distance can be brought into focus, but sharpness is severely degraded by diffraction with such small apertures. Generally, the highest degree of "sharpness" is achieved at an aperture near the middle of a lens's range (for example, f/8 for a lens with available apertures of f/2.8 to f/16). However, as lens technology improves, lenses are becoming capable of making increasingly sharp images at wider apertures.

  

Image capture is only part of the image forming process. Regardless of material, some process must be employed to render the latent image captured by the camera into a viewable image. With slide film, the developed film is simply mounted for projection. Print film requires the developed film negative to be printed onto photographic paper or transparency. Digital images may be uploaded to an image server (e.g., a photo-sharing web site), viewed on a television, or transferred to a computer or digital photo frame. Any type can also be printed on more "classical" media such as regular paper or photographic paper.

  

Prior to the rendering of a viewable image, modifications can be made using several controls. Many of these controls are similar to controls during image capture, while some are exclusive to the rendering process. Most printing controls have equivalent digital concepts, but some create different effects. For example, dodging and burning controls are different between digital and film processes. Other printing modifications include:

Digital point-and-shoot cameras have become widespread consumer products, outselling film cameras, and including new features such as video and audio recording. Kodak announced in January 2004 that it would no longer sell reloadable 35 mm cameras in western Europe, Canada and the United States after the end of that year. Kodak was at that time a minor player in the reloadable film camera market. In January 2006, Nikon followed suit and announced that it would stop production of all but two models of its film cameras: the low-end Nikon FM10 and the high-end Nikon F6. On 25 May 2006, Canon announced it would stop developing new film SLR cameras.[34] Though most new camera designs are now digital, a new 6×6 cm/6×7 cm medium format film camera was introduced in 2008 in a cooperation between Fuji and Voigtländer.[35][36]

  

According to a survey made by Kodak in 2007 when the majority of photography was already digital, 75 percent of professional photographers say they will continue to use film, even though some embrace digital.[37]

  

The PMA says that in 2000 nearly a billion rolls of film were sold each year; by 2011 this had fallen to a mere 20 million rolls, plus 31 million single-use cameras.[38]

  

Source:

en.wikipedia.org/wiki/Photography

de.wikipedia.org/wiki/Fotografie

   

High-dynamic-range imaging (HDRI or HDR) is a set of techniques used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. HDR images can represent the range of intensity levels found in real scenes more accurately, from direct sunlight to faint starlight, and are often captured by combining several differently exposed pictures of the same subject.[1][2][3][4]

 

Non-HDR cameras take photographs with a limited exposure range, resulting in the loss of detail in bright or dark areas. HDR compensates for this loss of detail by capturing multiple photographs at different exposure levels and combining them to produce a photograph representative of a broader tonal range.

 

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR)[5] or standard-dynamic-range (SDR)[6] photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor. Tone mapping methods, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.

In photography, dynamic range is measured in EV differences (known as stops) between the brightest and darkest parts of the image that show detail. An increase of one EV, or one stop, is a doubling of the amount of light; a range of ten stops, for example, corresponds to a contrast ratio of 2^10 = 1024:1.
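The stop-to-contrast-ratio relationship is simple exponential arithmetic, sketched here for illustration:

```python
# Each stop doubles the light, so a range of n stops spans a 2**n : 1
# contrast ratio between the brightest and darkest detailed parts.
def contrast_ratio(stops):
    return 2 ** stops

print(contrast_ratio(10))  # 1024  -> a 1024:1 ratio
print(contrast_ratio(20))  # 1048576, roughly the span from sunlight to deep shadow
```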

High-dynamic-range photographs are generally achieved by capturing multiple standard photographs, often using exposure bracketing, and then merging them into an HDR image. Digital photographs are often encoded in a camera's raw image format, because 8 bit JPEG encoding doesn't offer enough values to allow fine transitions (and introduces undesirable effects due to the lossy compression).
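A rough sketch of such a merge (a simplified illustration, not the method of any particular tool): each bracketed shot estimates scene radiance as pixel value divided by exposure time, and a weighting function discounts clipped highlights and near-black shadows. All pixel data and exposure times below are invented, and a real merge (e.g. Debevec's method) would also recover the camera's response curve first:

```python
import numpy as np

# Two bracketed shots of the same 1x2-pixel scene, linear values in 0..1.
exposure_times = np.array([1 / 100, 1 / 25])   # seconds, one per shot
shots = np.array([
    [[0.10, 0.90]],                            # short exposure
    [[0.40, 1.00]],                            # long exposure (clipped at 1.0)
])

# Hat-shaped weights: mid-tones count most, clipped/near-black pixels barely.
weights = 1.0 - np.abs(shots - 0.5) * 2
weights = np.clip(weights, 1e-4, None)

# Each shot estimates radiance as value / exposure_time; combine by weights.
radiance = (np.sum(weights * shots / exposure_times[:, None, None], axis=0)
            / np.sum(weights, axis=0))
print(radiance)  # both shots agree the first pixel has radiance ~10
```

Note how the second pixel's radiance comes almost entirely from the short exposure, because the long exposure clipped it: that is exactly the detail a single shot would have lost.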

 

The images from any camera that allows manual exposure control can be used to create HDR images. This includes film cameras, though the images may need to be digitized so they can be processed with software HDR methods.

 

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II.[10] As the popularity of this imaging method grows, several camera manufacturers are now offering built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone mapped JPEG file.[11] The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format.[12] Even some smartphones now include HDR modes, and most platforms have apps that provide HDR picture taking.[13]

 

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.[14]

 

Camera characteristics

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and spectral calibration affect resulting high-dynamic-range images.[15]

 

Tone mapping

Main article: Tone mapping

Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast.
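As a minimal sketch, one well-known global tone-mapping operator is Reinhard's L/(1+L) curve; real tone mappers add local adaptation to retain localized contrast, which this purely global version does not:

```python
def reinhard_tonemap(luminance):
    """Global Reinhard operator: maps [0, inf) scene luminance into [0, 1).

    Bright values are compressed heavily while dark values pass nearly
    unchanged, squeezing an unbounded range into a displayable one.
    """
    return [L / (1.0 + L) for L in luminance]

print(reinhard_tonemap([0.0, 1.0, 9.0, 99.0]))  # [0.0, 0.5, 0.9, 0.99]
```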

 

Software

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include:

Adobe Photoshop

Dynamic Photo HDR

HDR PhotoStudio

Luminance HDR

Oloneo PhotoEngine

Photomatix Pro

PTGui

Comparison with traditional digital images

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors that should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.[16][17][18]
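As an illustrative sketch of the gamma encoding mentioned above (assuming a simple power law with gamma 2.2; real standards such as sRGB use a piecewise curve), the round trip between linear scene-referred values and gamma-encoded device-referred values looks like:

```python
def gamma_encode(linear: float, gamma: float = 2.2) -> float:
    """Compress linear scene values (0..1) into device-referred code values (0..1)."""
    return linear ** (1.0 / gamma)

def gamma_decode(encoded: float, gamma: float = 2.2) -> float:
    """Recover approximately linear values from gamma-encoded ones."""
    return encoded ** gamma

# Gamma encoding spends more code values on dark tones, matching human vision:
# 18% mid-gray encodes to roughly 0.46, nearly half the code range.
print(round(gamma_encode(0.18), 2))
```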

 

Unlike traditional images, HDR images often do not use fixed ranges per color channel, which lets them represent many more colors over a much wider dynamic range. Instead of integer values for each color channel (e.g., 0..255 in an 8-bit-per-channel encoding for red, green and blue), they use a floating-point representation, commonly 16-bit (half precision) or 32-bit floating-point numbers per channel. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
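The practical difference can be sketched with Python's standard struct module, whose 'e' format packs IEEE 754 half-precision (16-bit) floats: where an 8-bit integer channel is limited to 256 fixed steps in 0..1, a half-float channel keeps relative precision across a far larger range:

```python
import struct

def to_half_and_back(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision (16-bit float)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_half_and_back(0.5))     # 0.5 -> exactly representable
print(to_half_and_back(1000.0))  # 1000.0 -> far above the 0..1 display range,
                                 # impossible to store in an 8-bit 0..255 channel
```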

The idea of using several exposures to fix a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, the luminosity range being too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.[20]

 

Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This is effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. Producing a print that reproduced the tonal range of the scene, which ranged from a bright lamp (relative to the scene) to a dark shadow, took five days.[22]

 

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which features dodging and burning prominently, in the context of his Zone System.

 

With the advent of color photography, tone mapping in the darkroom was no longer possible, due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response over the years, or shot in black and white to use tone mapping methods.

Film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force".[23] This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color.[24] The dynamic range of this extended range film has been estimated as 1:10^8.[25] It has been used to photograph nuclear explosions,[26] for astronomical photography,[27] for spectrographic research,[28] and for medical imaging.[29] Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

 

Late twentieth century

The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel led by Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988.[30] In 1993, the same group introduced the first commercial medical camera that performed real-time capture of multiple images with different exposures and produced an HDR video image.[31]

 

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping this result. Global HDR was first introduced in 1993[1] resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.[2]

 

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory.[32] Mann's method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.[32]

 

In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping.

 

While custom high-dynamic-range digital video solutions had been developed for industrial manufacturing during the 1980s, it was not until the early 2000s that several scholarly research efforts used consumer-grade sensors and cameras.[34] A few companies such as RED[35] and Arri[36] have been developing digital sensors capable of a higher dynamic range. RED EPIC-X can capture HDRx images with a user selectable 1-3 stops of additional highlight latitude in the 'x' channel. The 'x' channel can be merged with the normal channel in post production software. With the advent of low-cost consumer digital cameras, many amateurs began posting tone mapped HDR time-lapse videos on the Internet, essentially a sequence of still photographs in quick succession. In 2010 the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a beam splitter and consumer grade HD video cameras.[37] Similar methods have been described in the academic literature in 2001[38] and 2007.[39]

 

Modern movies have often been filmed with cameras featuring a higher dynamic range, and legacy movies can be upgraded even if manual intervention is needed for some frames (as happened in the past when black-and-white films were upgraded to color). Special effects, especially those in which real and synthetic footage are seamlessly mixed, require both HDR shooting and rendering. HDR video is also needed in applications where capturing temporal changes in the scene demands high accuracy. This is especially important in the monitoring of some industrial processes such as welding, in predictive driver-assistance systems in the automotive industry, and in surveillance systems, to name just a few possible applications. HDR video can also speed up image acquisition in applications that need a large number of static HDR images, for example image-based methods in computer graphics. Finally, with the spread of TV sets with enhanced dynamic range, broadcasting HDR video may become important, but may take a long time to occur due to standardization issues. For this particular application, having intelligent TV sets enhance the current low-dynamic-range (LDR) video signal to HDR seems to be a more viable near-term solution.

 

More and more CMOS image sensors now have high-dynamic-range capability within the pixels themselves. Such pixels are intrinsically non-linear (by design), so that the wide dynamic range of the scene is non-linearly compressed into a smaller dynamic range electronic representation inside the pixel.[41] Such sensors are used in extreme-dynamic-range applications such as welding or automotive imaging.

 

Some other sensors, designed for use in security applications, can automatically provide two or more images for each frame with alternating exposures. For example, a sensor intended for 30 fps video will output 60 fps, with the odd frames at a short exposure time and the even frames at a longer one. Some of these sensors can even combine the two images on-chip, so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.
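Assuming the odd/even frame convention described above, the host-side regrouping of such a stream can be sketched as:

```python
def pair_alternating_frames(frames):
    """Group a 60 fps short/long alternating stream into (short, long) pairs,
    yielding one HDR input pair per 30 fps output frame."""
    return list(zip(frames[0::2], frames[1::2]))

stream = ['s0', 'l0', 's1', 'l1', 's2', 'l2']  # s = short exposure, l = long
print(pair_alternating_frames(stream))  # [('s0', 'l0'), ('s1', 'l1'), ('s2', 'l2')]
```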

 

Source:

 

en.wikipedia.org/wiki/High-dynamic-range_imaging

 

de.wikipedia.org/wiki/High_Dynamic_Range_Image

  

Photography (see section below for etymology) is the art, science and practice of creating durable images by recording light or other electromagnetic radiation, either chemically by means of a light-sensitive material such as photographic film, or electronically by means of an image sensor.[1] Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. The result in an electronic image sensor is an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing.

 

The result in a photographic emulsion is an invisible latent image, which is later chemically developed into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.

 

Photography has many uses for business, science, manufacturing (e.g. photolithography), art, recreational purposes, and mass communication.

 

The word "photography" was created from the Greek roots φωτός (phōtos), genitive of φῶς (phōs), "light"[2] and γραφή (graphé) "representation by means of lines" or "drawing",[3] together meaning "drawing with light".[4]

 

Several people may have coined the same new term from these roots independently. Hercules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, photographie, in private notes which a Brazilian photography historian believes were written in 1834.[5] Johann von Maedler, a Berlin astronomer, is credited in a 1932 German history of photography as having used it in an article published on 25 February 1839 in the German newspaper Vossische Zeitung.[6] Both of these claims are now widely reported but apparently neither has ever been independently confirmed as beyond reasonable doubt. Credit has traditionally been given to Sir John Herschel both for coining the word and for introducing it to the public. His uses of it in private correspondence prior to 25 February 1839 and at his Royal Society lecture on the subject in London on 14 March 1839 have long been amply documented and accepted as settled fact.

 

History and evolution

Precursor technologies

Photography is the result of combining several technical discoveries. Long before the first photographs were made, Chinese philosopher Mo Di and Greek mathematicians Aristotle and Euclid described a pinhole camera in the 5th and 4th centuries BCE.[8][9] In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments,[10] Ibn al-Haytham (Alhazen) (965–1040) studied the camera obscura and pinhole camera,[9][11] Albertus Magnus (1193–1280) discovered silver nitrate,[12] and Georg Fabricius (1516–71) discovered silver chloride.[13] Techniques described in the Book of Optics are capable of producing primitive photographs using medieval materials.[14][15][16]

 

Daniele Barbaro described a diaphragm in 1566.[17] Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694.[18] The fiction book Giphantie, published in 1760, by French author Tiphaigne de la Roche, described what can be interpreted as photography.[17]

 

The discovery of the camera obscura, which provides an image of a scene, dates back to ancient China. Leonardo da Vinci described naturally occurring camerae obscurae formed by dark caves on the edge of a sunlit valley: a hole in the cave wall acts as a pinhole camera and projects a laterally reversed, upside-down image on a piece of paper. The birth of photography was thus primarily concerned with developing a means to fix and retain the image produced by the camera obscura.

 

The first success of reproducing images without a camera occurred when Thomas Wedgwood, from the famous family of potters, obtained copies of paintings on leather using silver salts. Since he had no way of permanently fixing those reproductions (stabilizing the image by washing out the non-exposed silver salts), they would turn completely black in the light and thus had to be kept in a dark room for viewing.

 

Renaissance painters used the camera obscura which, in fact, gives the optical rendering in color that dominates Western Art. The camera obscura literally means "dark chamber" in Latin. It is a box with a hole in it which allows light to go through and create an image onto the piece of paper.

 

First camera photography (1820s)

Invented in the early decades of the 19th century, photography by means of the camera seemed able to capture more detail and information than traditional media, such as painting and sculpture.[19] Photography as a usable process goes back to the 1820s with the development of chemical photography. The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it.[7] Niépce was successful again in 1825. He made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens), in 1826 or 1827.[20]

 

Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. Working in partnership with Louis Daguerre, he developed a somewhat more sensitive process that produced visually superior results, but it still required a few hours of exposure in the camera. Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process, the essential elements of which were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: unlike the other pedestrian and horse-drawn traffic on the busy boulevard, which appears deserted, one man having his boots polished stood sufficiently still throughout the approximately ten-minute-long exposure to be visible. Eventually, France agreed to pay Daguerre a pension for his process in exchange for the right to present his invention to the world as the gift of France, which occurred on 19 August 1839.

Meanwhile, in Brazil, Hercules Florence had already created his own process in 1832, naming it Photographie, and an English inventor, William Fox Talbot, had created another method of making a reasonably light-fast silver process image but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his method and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, with exposures comparable to the daguerreotype. In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies, the basis of most chemical photography up to the present day. Daguerreotypes could only be replicated by rephotographing them with a camera.[21] Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence.[22][23]

 

John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839.

 

In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets to the collodion process; the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper.

 

Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1884, George Eastman developed an early type of film to replace photographic plates, leading to the technology used by film cameras today.

 

In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize for Physics in 1908.

 

Black-and-white

See also: Monochrome photography

All photography was originally monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost and its "classic" photographic look. The tones and the contrast between light and dark areas define black-and-white photography.[24] Monochromatic pictures are not necessarily composed of pure blacks and whites; they can contain other hues depending on the process. The cyanotype process produces an image composed of blue tones. The albumen process, first used more than 150 years ago, produces brown tones.

 

Many photographers continue to produce some monochrome images, often because of the established archival permanence of well processed silver halide based materials. Some full color digital images are processed using a variety of techniques to create black and whites, and some manufacturers produce digital cameras that exclusively shoot monochrome.

 

Color

Color photography was explored beginning in the mid-19th century. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.

 

The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by physicist James Clerk Maxwell in 1855. Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image.

 

Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s.

 

Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images.

 

The development of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability.

 

Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s.

 

Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multilayer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure.

 

Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multilayer emulsion and the same principles, most closely resembling Agfa's product.

 

Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963.

 

Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment.

 

Digital photography

Main article: Digital photography

See also: Digital camera and Digital versus film photography

In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born.

 

Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.[25] An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital imaging is a highly manipulative medium. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications.

  

Photography gained the interest of many scientists and artists from its inception. Scientists have used photography to record and study movements, such as Eadweard Muybridge's study of human and animal locomotion in 1887. Artists are equally interested by these aspects but also try to explore avenues other than the photo-mechanical representation of reality, such as the pictorialist movement.

 

Military, police, and security forces use photography for surveillance, recognition and data storage. Photography is used by amateurs to preserve memories, to capture special moments, to tell stories, to send messages, and as a source of entertainment. High speed photography allows for visualizing events that are too fast for the human eye.

 

Technical aspects

Main article: Camera

The camera is the image-forming device, and photographic film or a silicon electronic image sensor is the sensing medium. The respective recording medium can be the film itself, or a digital electronic or magnetic memory.[26]

 

Photographers control the camera and lens to "expose" the light recording material (such as film) to the required amount of light to form a "latent image" (on film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper or film.

 

The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. The subject being photographed, however, must be illuminated. Cameras can range from small to very large: a camera can even be a whole room that is kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera).

 

As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens.

 

The movie camera is a type of photographic camera which takes a rapid sequence of photographs on strips of film. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures together to create the illusion of motion.[27]

 

Camera controls are interrelated. The total amount of light reaching the film plane (the 'exposure') changes with the duration of exposure, the aperture of the lens, and the effective focal length of the lens (which in variable focal length lenses can force a change in aperture as the lens is zoomed). Changing any of these controls can alter the exposure. Many cameras may be set to adjust most or all of these controls automatically. This automatic functionality is useful for occasional photographers in many situations.

 

The duration of an exposure is referred to as shutter speed, often even in cameras that do not have a physical shutter, and is typically measured in fractions of a second. Exposures of one to several seconds are quite possible, usually for still-life subjects, and for night scenes exposure times can be several hours. For a subject that is in motion, however, a fast shutter speed is needed to prevent the photograph from coming out blurred.[29]

 

The effective aperture is expressed by an f-number or f-stop (derived from focal ratio), which is proportional to the ratio of the focal length to the diameter of the aperture. A longer lens passes less light through an aperture of the same diameter, because the light has a greater distance to travel; a lens with a shorter focal length will be brighter with the same size of aperture.

 

The smaller the f/number, the larger the effective aperture. The present system of f/numbers to give the effective aperture of a lens was standardized by an international convention. There were earlier, different series of numbers in older cameras.

 

If the f-number is decreased by a factor of √2, the aperture diameter is increased by the same factor, and its area is increased by a factor of 2. The f-stops that might be found on a typical lens include 2.8, 4, 5.6, 8, 11, 16, 22, 32, where going up "one stop" (using lower f-stop numbers) doubles the amount of light reaching the film, and stopping down one stop halves the amount of light.
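Since aperture area scales as 1/N², the light passed at one f-number relative to another follows directly; a small sketch (note that marked f-numbers like 5.6 and 11 are rounded powers of √2, so ratios come out approximate):

```python
def relative_light(n_from: float, n_to: float) -> float:
    """How much light f/n_to passes relative to f/n_from at the same shutter speed.

    Aperture area is proportional to 1/N**2, so halving the f-number
    quadruples the light (two stops).
    """
    return (n_from / n_to) ** 2

print(relative_light(4, 8))    # 0.25 -> stopping down two stops quarters the light
print(relative_light(8, 5.6))  # ~2.04 -> about one stop more (5.6 is rounded from 4*sqrt(2))
```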

 

Image capture can be achieved through various combinations of shutter speed, aperture, and film or sensor speed. Different (but related) settings of aperture and shutter speed enable photographs to be taken under various conditions of film or sensor speed, lighting and motion of subjects and/or camera, and desired depth of field. A slower speed film will exhibit less "grain", and a slower speed setting on an electronic sensor will exhibit less "noise", while higher film and sensor speeds allow for a faster shutter speed, which reduces motion blur or allows the use of a smaller aperture to increase the depth of field.

 

For example, a wider aperture is used in lower light and a smaller aperture in brighter light. If a subject is in motion, then a high shutter speed may be needed. A tripod can also be helpful in that it enables a slower shutter speed to be used.

 

For example, f/8 at 8 ms (1/125 of a second) and f/5.6 at 4 ms (1/250 of a second) yield the same amount of light. The chosen combination has an impact on the final result. The aperture and focal length of the lens determine the depth of field, which refers to the range of distances from the lens that will be in focus. A longer lens or a wider aperture will result in "shallow" depth of field (i.e. only a small plane of the image will be in sharp focus). This is often useful for isolating subjects from backgrounds as in individual portraits or macro photography.
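The equal-light claim can be checked with the standard exposure-value formula EV = log2(N²/t):

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """EV = log2(N**2 / t); settings with equal EV admit the same total light."""
    return math.log2(f_number ** 2 / shutter_seconds)

ev_a = exposure_value(8.0, 1 / 125)   # f/8 at 1/125 s
ev_b = exposure_value(5.6, 1 / 250)   # f/5.6 at 1/250 s
# Nearly equal EV (f-numbers are rounded), so the same amount of light,
# but different depth of field and motion-freezing ability:
print(abs(ev_a - ev_b) < 0.1)  # True
```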

 

Conversely, a shorter lens, or a smaller aperture, will result in more of the image being in focus. This is generally more desirable when photographing landscapes or groups of people. With very small apertures, such as pinholes, a wide range of distances can be brought into focus, but sharpness is severely degraded by diffraction. Generally, the highest degree of sharpness is achieved at an aperture near the middle of a lens's range (for example, f/8 for a lens with available apertures of f/2.8 to f/16). However, as lens technology improves, lenses are becoming capable of making increasingly sharp images at wider apertures.
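The depth-of-field behaviour described above can be quantified with the standard thin-lens approximations (hyperfocal distance and near/far sharpness limits). This is a hedged sketch assuming a full-frame circle of confusion of 0.03 mm; the function names are mine:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance H = f^2 / (N * c) + f; focusing at H renders
    everything from about H/2 to infinity acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_limits_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable sharpness for a given subject distance."""
    h = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = (subject_mm * (h - focal_mm) / (h - subject_mm)
           if subject_mm < h else float("inf"))
    return near, far

# A 50 mm lens focused at 3 m: stopping down from f/2.8 to f/8
# widens the zone of sharpness around the subject.
print(dof_limits_mm(50, 2.8, 3000))  # roughly (2730, 3330) mm - shallow
print(dof_limits_mm(50, 8, 3000))    # roughly (2340, 4190) mm - deeper
```

The same formulas show why a longer focal length at the same f-number gives a shallower zone: H grows with f², pushing the near and far limits together around the subject.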

 

Image capture is only part of the image-forming process. Regardless of material, some process must be employed to render the latent image captured by the camera into a viewable image. With slide film, the developed film is simply mounted for projection. Print film requires the developed film negative to be printed onto photographic paper or transparency film. Digital images may be uploaded to an image server (e.g., a photo-sharing web site), viewed on a television, or transferred to a computer or digital photo frame. Any of these can also be printed on more "classical" media such as plain paper or photographic paper.

 

Prior to the rendering of a viewable image, modifications can be made using several controls. Many of these controls are similar to controls during image capture, while some are exclusive to the rendering process. Most printing controls have equivalent digital concepts, but some create different effects; for example, dodging and burning controls behave differently in digital and film processes. Other modifications can also be made during printing.

Digital point-and-shoot cameras have become widespread consumer products, outselling film cameras and including new features such as video and audio recording. Kodak announced in January 2004 that it would no longer sell reloadable 35 mm cameras in western Europe, Canada and the United States after the end of that year. Kodak was at that time a minor player in the reloadable film camera market. In January 2006, Nikon followed suit and announced that it would stop production of all but two models of its film cameras: the low-end Nikon FM10 and the high-end Nikon F6. On 25 May 2006, Canon announced it would stop developing new film SLR cameras.[34] Though most new camera designs are now digital, a new 6x6cm/6x7cm medium format film camera was introduced in 2008 in a cooperation between Fuji and Voigtländer.[35][36]

 

According to a survey conducted by Kodak in 2007, when the majority of photography was already digital, 75 percent of professional photographers said they would continue to use film, even though some had embraced digital.[37]

 

According to the PMA, nearly a billion rolls of film were sold in 2000; by 2011 the figure was a mere 20 million rolls, plus 31 million single-use cameras.[38]

 

Source:

en.wikipedia.org/wiki/Photography

de.wikipedia.org/wiki/Fotografie

 

Passau is an independent university city in the administrative region of Lower Bavaria in eastern Bavaria. It lies on the border with Austria at the confluence of the rivers Danube, Inn and Ilz, and is therefore also called the "City of Three Rivers" (Dreiflüssestadt). With almost 50,000 inhabitants, Passau is the second-largest city of the region.

The city of Passau lies at the confluence of the three rivers Danube, Inn and Ilz. During the uplift of the Bavarian Forest in the late Tertiary and the Quaternary, the Danube and the Inn cut down here into the crystalline basement, forming an antecedent water gap: the river actively incises into the rising mountain body, keeping pace with the tectonic uplift. Characteristic of this setting is the locally high relief. Petrographically, the Passau area is dominated, as is typical of the Moldanubian zone, by metamorphic rocks such as gneisses and diatexites, which in many places are interspersed with Palaeozoic plutonic rocks. These are mostly granites (the Hauzenberg, Haidmühle, Schärding and Peuerbach granites), while diorites occur only sporadically. Two major tectonic fault zones, the Bavarian Pfahl and the Passau Pfahl, run north of the city. To the south of Passau lies the Molasse Basin of the Alpine foreland (the Lower Bavarian hill country). This Alpine "debris trough" is filled with Tertiary sediments of the freshwater and marine molasse and slopes continuously down towards the Danube and the lower Inn. The gently undulating appearance of this area results from solifluction and fluvial erosion during the last glacial periods. In places the Tertiary sediments are overlain by Pleistocene unconsolidated sediments such as gravels, deposited by the Inn as it drained the Alps. The aeolian sediment loess and its derivative loess loam are also found here in places.

Passau lies at 48° north in the northern hemisphere and is therefore predominantly under the influence of air currents from the west. As the climate diagram shows, Passau can be assigned to the warm temperate climate zone. A continental influence is also present in the Passau area, marked by sometimes very cold and snowy winters and by hot, dry summers. Heat thunderstorms also occur in summer.

On average there are 36 summer days with a maximum temperature above 25 °C, compared with 115 frost days with a minimum temperature below 0 °C. The months with the least precipitation are October and November. Each year an "Indian summer" brings mild temperatures in late autumn.

Because of its location in a valley basin and the confluence of the water-rich Danube and Inn, fog and high fog occur frequently.

The adjacent climate diagram shows data from a measuring station in Fürstenzell (near Passau); note, however, that this station lies almost 100 metres higher than Passau itself.

The division of Passau into districts is primarily statistical in nature; there are no official or political districts. Until 2013 there were eight statistical districts, which essentially reflected cadastral or former municipal boundaries: Altstadt, Grubweg, Hals, Hacklberg, Heining, Haidenhof Nord, Haidenhof Süd and Innstadt. In 2013 these were reorganised into 16 open-council areas ("Bürgerversammlungsgebiete"). Despite the changed designation, these still come closest to having the character of city districts and are therefore still called districts in everyday speech.

The 16 areas are: Altstadt/Innenstadt, Auerbach, Grubweg, Hacklberg, Haidenhof Nord, Haidenhof Süd, Hals, Heining, Innstadt, Kohlbruck, Neustift, Patriching, Rittsteig, Schalding links der Donau, Schalding rechts der Donau and St. Nikola.

A first Celtic settlement of the La Tène period stood on the old-town hill, with a Danube harbour at the site of today's old town hall. This Celtic oppidum, Boiodurum, was conquered by the Romans in the first century AD and became part of the Roman province of Raetia. On the site of today's cathedral the Roman fort Batavis (Castra Batava) was built as part of the Limes fortifications. The name "Batavis" derives from the Germanic mercenaries of the Batavi tribe initially stationed there; from it the modern name "Passau" developed.

In late antiquity the fort of Boiotro was built on the other bank of the Inn, in the Roman province of Noricum, and remained in use until the withdrawal of the Romans. The Vita Severini describes how its garrison at first held out longer than elsewhere when, in the second half of the 5th century, pay increasingly failed to arrive. The Roman troops probably left the region between 476 and 490.

The Bavarii, who took possession of the area in the 6th century, built a ducal castle on the peninsula. By 739 Passau was a bishop's seat; around this time the convent of Niedernburg was also founded, which held large estates in the catchment area of the Ilz. In the 11th century its abbess was Gisela, sister of Emperor Henry II and widow of Stephen I, King of Hungary. The convent's dominance ended in 999, when the emperor transferred secular rule over the city to the Passau bishop Christian. Between 1078 and 1099 the bishops of Passau temporarily lost their rights of lordship over the city to the newly created Burgraviate of Passau and Count Ulrich, installed by King Henry IV; after his death the rights reverted to the bishops.

In the first half of the 12th century Passau's smithing trade was of great importance. In 1217 Passau became a prince-bishopric, with the convent of Niedernburg, given to the bishop in 1161 by Frederick I Barbarossa, as its seat. Passau was granted town rights in 1225. There were several uprisings of the citizens against the rule of the prince-bishops, the last in 1367/68, but all of them failed. On the other hand, the bishopric developed considerable prosperity, which repeatedly aroused the covetousness of its neighbours Bavaria and Austria.

 

In 1477 the Christian Christoph Eysengreißheimer was accused of having sold eight stolen hosts to the "Jewish enemies of the Saviour", who were then said to have desecrated them. The accused were imprisoned, tortured and, after confessing, beheaded if they had first accepted baptism; otherwise they were torn with red-hot tongs and burned.

Passau is the place of origin of the Ausbund, the oldest hymnal of Protestantism, still used by the Amish today. Its core collection was written between 1535 and 1540 in the dungeon of Passau castle by imprisoned Anabaptists. Some of them died during their captivity; most of the imprisoned Anabaptists suffered martyrdom after their imprisonment. The first printed edition bears the title: Etliche schöne christliche Gesäng wie sie in der Gefengkniß zu Passau im Schloß von den Schweizer Brüdern durch Gottesgnad gedicht und gesungen warden. Ps. 139.

In 1552 the Treaty of Passau was concluded in the city, paving the way for the toleration of the confessions in the Peace of Augsburg.

Between 1622 and 1633 the Philosophical-Theological College was founded. In 1676 the so-called imperial wedding of Leopold I and Eleonore of Pfalz-Neuburg took place in Passau. The city was repeatedly struck by floods and fires; in 1662 a fire reduced the entire city to rubble. Italian master builders (Carlone and Lurago) rebuilt it afterwards and gave the city its present Mediterranean-looking Baroque appearance. The first Passau newspaper appeared in 1689.[5] Passau's time as an independent principality ended with secularisation in 1803, when it fell to Bavaria. In 1821 the city again became a bishop's seat. From 1806 to 1839 Passau was the capital of the Lower Danube district. The railway line to Straubing opened in 1860. St. Nikola was incorporated in 1870, Haidenhof in 1909 and Beiderwies in 1923.

 

From 1942 Passau housed a subcamp of the Dachau concentration camp, whose prisoners were used in the construction of an underwater power station at today's Oberilzmühle reservoir. From November 1942 this subcamp was subordinated to the Mauthausen concentration camp, which opened the subcamp Passau II in March 1944 and Passau III in March 1945. There the prisoners worked in the Waldwerke Passau-Ilzstadt and unloaded ships for the Bayerischer Lloyd.

Through the incorporations carried out in the course of the municipal territorial reform, the city area grew from 20 to 70 square kilometres and the population rose by 40 % to 50,000. Since 1978 Passau has been a university city; the university's strengths lie in law, business administration and computer science.

In 1980 the city of Passau was awarded the Europe Prize for its efforts on behalf of European integration. In 1993 Passau passed the mark of 50,000 inhabitants. It is the regional centre of the Danube-Forest region.

In May and June 2013 the city experienced its worst flooding in five hundred years, when the Passau/Danube gauge reached the historic mark of 12.89 m. The drinking-water supply had to be temporarily shut down, and teaching at schools and the university was suspended. The dedicated help of Passau's university students during and above all after the flood disaster deserves mention: the Facebook initiative "Passau räumt auf", founded and run by students, was awarded the German Citizens' Prize in 2013.

The old town lies on a narrow peninsula at the confluence of the Inn and the Danube. St. Stephen's Cathedral stands on a small hill, and towards both river banks the lanes descend, in places in steep flights of steps.

Thanks to Italian master builders, the townscape has a Mediterranean flair and is characterised by houses in the Inn-Salzach style, which is why Passau is often called the "Venice of Bavaria". Beyond the two river banks the landscape rises in green hills, and the city is overlooked by the Veste Oberhaus to the north and the pilgrimage church of Mariahilf to the south.

The area west of the old town, between the main railway station and the St. Nikola monastery, was redeveloped up to 2011. Because Deutsche Bahn had dismantled track facilities and the withdrawal of the Bundeswehr made a new event venue in Kohlbruck possible, a large area of the inner city could be redesigned. This project, known as the Neue Mitte (New Centre), was completed in September 2008.

At the foot of the castle complex Veste Niederhaus, first the Ilz joins the Danube from the left and shortly afterwards the Inn from the right. The water of the Inn, coming from the Alps, is green; that of the Danube blue; and that of the Ilz, coming from a moorland area, black, so that for a long stretch after the confluence the Danube shows three water colours (green, blue, black). It is striking how strongly the green water of the Inn pushes the Danube's water aside. Besides the Inn's at times very large volume of water, this is mainly due to the very different depths of the two rivers (Inn: 1.90 metres / Danube: 6.80 metres): "the Inn flows over the Danube". Although on the annual average the Inn carries about five per cent more water than the Danube itself, this stems mainly from the Inn's heavy floods during the snowmelt, while the Danube has a markedly more constant flow; for most of the year (seven months, October to April) the Danube carries more water than the Inn.

So even though the visual impression suggests speaking of the Danube flowing into the Inn, the name Danube for the resulting river remains justified, not least by the length of the distances travelled (Danube: 647 km / Inn: 510 km).

Its location at the confluence of several large rivers repeatedly causes flooding, which affects above all the historic inner city.

 

Notable buildings

See also: Liste der Baudenkmäler in Passau (list of listed buildings in Passau)

St. Stephen's Cathedral is the seat of the Bishop of Passau. It goes back to a church that existed as early as around 450. The bishop's church was first mentioned in records in 730 and had been the cathedral of the diocese since 739. Between 1280 and 1325 it was replaced by an early Gothic cathedral, to which a late Gothic east section was added from 1407 to 1560. The city fire of 1662 destroyed the cathedral completely except for the outer walls of the east section. From 1668 to 1693 it was rebuilt by Carlo Lurago, this time in the Baroque style. Particularly noteworthy in the interior are stucco work by Giovanni Battista Carlone and paintings by Johann Michael Rottmayr in the side altars. St. Stephen's is the largest Baroque cathedral north of the Alps. With 17,774 pipes and 233 registers, the cathedral organ is the largest cathedral organ in the world and the largest organ outside the USA.

 

Next to the cathedral, on the Domplatz (the highest point of the old town), the Lamberg Palais is also worth mentioning; the Treaty of Passau was concluded there in 1552. South of the cathedral stands the Old Residence, which today houses the regional court. The city theatre occupies the former prince-bishops' opera house: originally built in 1645 as a ball house, the building was used as a court comedy house from 1770 and finally converted into an opera house in 1783 by Johann Georg Hagenauer on the orders of Prince-Bishop von Auersperg.

On the Domplatz rises the monument to the Bavarian king Maximilian I Joseph, erected to commemorate the fact that, after the secularisation of 1803, the principality of Passau was absorbed into the newly formed Kingdom of Bavaria. On a tall cubic granite pedestal, intended to symbolise the Bavarian constitution, stands a bronze statue of the king in coronation regalia with a gesture of blessing. The design of the monument, dated 1824 and erected in 1826, is probably by Karl Eichler; the statue was modelled by Christian Jorhan the Younger and cast by Karl Samassa.

On the Danube bank stands the town hall, dating from the 14th century, with its 38-metre tower, which was added in 1890.

On the Rathausplatz, the patrician house Wilder Mann houses the Passau Glass Museum, with exhibits of world-famous Bohemian glass.

East of the Rathausplatz stands the neoclassical main customs office, built between 1848 and 1851 by Friedrich von Gärtner.

Not far from the town hall is the former Jesuit church of St. Michael, with the adjacent complex of the former Jesuit college, and further towards the Ortspitze the former Benedictine convent of Niedernburg. Nearby stands the civic orphanage, endowed in 1749 by the shipbuilder Lukas Kern and built between 1750 and 1755 by the cathedral chapter's master builder Johann Michael Schneitmann. In front of the orphanage stands a statue of John of Nepomuk by the Passau sculptor Joseph Carl Hofer, dating from 1759.

 

The so-called Ortspitze lies at the confluence of the Danube, Inn and Ilz. The gun bastion there, shaped like a cloverleaf and dating from 1531, formerly secured the river valleys towards the east.

The parish church of St. Paul was first mentioned in records in 1050; the present building dates from 1663 to 1678. Next to it, on the Rindermarkt, stands the two-nave hospital church (1380) of the St. John's Hospital, founded in 1200.

In the Bräugasse is the Museum of Modern Art (MMK), founded by Hanns Egon Wörlen, son of the painter Georg Philipp Wörlen.

In the centre of the old town lies the Residenzplatz, with its patrician houses and the New Episcopal Residence, in which the cathedral treasury and diocesan museum can be visited. In the middle of the square stands the Wittelsbach Fountain, erected in 1903 by Jakob Bradl of Munich to mark the city's hundred years of belonging to Bavaria. The Herberstein Palais (Schustergasse 4), with a façade articulated by wall pilasters, has an Italian-style Renaissance arcaded courtyard of 1590 and houses the Passau district court.

A walk along the picturesque, sunny Inn promenade is rewarding; it passes the Schaiblingsturm, a round defensive tower erected in the Middle Ages to protect the salt harbour.

The Ludwigsstraße and its side streets form the pedestrian zone, with shops and cafés. At the corner of the Heiliggeistgasse stands the votive church, the monastery church of the former Franciscan friary.

In the Schießgrabengasse is the civic armoury, and in the Theresienstraße the Protestant town parish church, built in 1856 to plans by Friedrich Bürklein.

Beyond the Danube, the mighty Veste Oberhaus rises on a hill. Among other things it houses the Oberhaus Museum, with the city museum and further collections focusing on eastern Bavaria and the neighbouring lands of Bohemia and Austria. Below the fortress, connected to it by a covered battlement walk, the privately owned Veste Niederhaus stands between the Danube and the Ilz. Leaning against the castle hill on the Ilz side is the former pilgrimage church of St. Salvator.

A few hundred metres upstream on the Danube lies Schloss Freudenhain, built from 1785 to 1792 by the Passau prince-bishop Cardinal Joseph Franz Anton Graf von Auersperg; it now houses the Auersperg-Gymnasium Freudenhain, named after him. Below it, near the riverside road, stands a late Gothic manor house with a Baroque façade, the last remnant of the former Schloss Eggendobl.

The Ilzstadt, opposite the old town, lost much of its historical fabric through post-war flood-protection redevelopment, which entailed the demolition of an entire row of houses. The Ilzstadt was originally a settlement of pack-animal drivers and fishermen and a trans-shipment point for the salt trade to Bohemia; the Golden Trail began there. Above the houses of the Ilzstadt stands the parish church of St. Bartholomäus, with a sturdy Romanesque tower and a Gothic nave.

Further up the Ilz lies the district of Hals, overlooked by the picturesque castle ruin of Hals; the name comes from its position on the neck ("Hals") of the narrow Halser loops of the Ilz. On the market square of the former market town of Hals, a pillory can still be seen in front of the town hall of 1510. With the construction of the Bavaria-Bad hydrotherapy spa in 1890, guests such as the writer Peter Rosegger and the composer Franz Lehár came to Hals, although the establishment closed during the First World War. Since 1920 the water of the Ilz behind Hals has been dammed by a weir into a lake for electricity generation. The Ilztal hiking trail also begins there, following former timber-rafting paths and including a walkable rafting tunnel.

Beyond the Inn, the pilgrimage church of Mariahilf rises on a hill above the Innstadt; the pilgrimage stairway has 321 steps. In the Innstadt, opposite the old town, the medieval Severinstor with its barbican of 1412 is still preserved, though the associated gate tower was demolished in 1820. Nearby are the round Peichterturm of 1403 on the Beiderbach and parts of the Innstadt wall of 1410, with square towers and an outer ward. In the centre of the Innstadt, on the small Kirchenplatz, stands the church of St. Gertraud, and opposite it the Rococo patrician house Zum schwarzen Adler. Also in the Innstadt are the Roman Museum, with the excavation site of the Roman fort Boiotro, and the Severinskirche, whose foundations go back to late antiquity.

The view of the old town from the Innsteg, known locally as the Fünferlsteg (after the former bridge toll of 5 pfennigs), is recommended. The footbridge connects the Innstadt with the university, which adjoins the old town on the opposite bank of the Inn and whose administration is housed in the former Augustinian canons' monastery of St. Nikola.

The theatre in the prince-bishops' opera house (Stadttheater Passau) is home to the music theatre of the Lower Bavarian State Theatre; the drama company is based in Landshut.

The Europäische Wochen (European Weeks) festival has been held since 1953. Founded by US officers, it was the first festival in post-war Germany devoted to the European idea. During his lifetime Lord Yehudi Menuhin was often a guest artist at the festival. Every year top-class artists, such as Krzysztof Penderecki, as well as politicians from all over Europe come to Passau and its surroundings; since the fall of the Iron Curtain the numerous events have taken place not only in south-east Bavaria and Upper Austria but also in South Bohemia. From 1995 to 2011 the festival was directed by Pankraz Freiherr von Freyberg, who gave it annually changing themes, e.g. 2007: "Im Europäischen Haus" ("In the European house"). Since 2012 Peter Baumgardt has been the festival's director.

In the 1970s a cabaret and small-stage scene arose around the Scharfrichterhaus in the Milchgasse and became known beyond the city. At the time the Scharfrichter stage formed a counterpoint to the city's petty-bourgeois, conservative atmosphere and strongly polarised the population. Since 1983 one of the most important German cabaret prizes, the "ScharfrichterBeil" (executioner's axe), has been awarded annually at the Passau Cabaret Days. Its best-known winners include Hape Kerkeling (1983), Urban Priol (1986) and Günter Grünwald (1988).

The Nibelungenhalle, dating from the Nazi era and venue of the CSU's Political Ash Wednesday from 1975 to 2003, was demolished in February/March 2004. The Dreiländerhalle in the Kohlbruck district was newly built as its functional successor. Around the former site of the Nibelungenhalle, the release of railway land by DB AG and the demolition of the hall itself were used to redesign the area. This project, known as the Neue Mitte, comprises several shopping facilities, an underground multiplex cinema, and an office and hotel tower.

There are well-founded indications that the Nibelungenlied originated in Passau or its surroundings, which is why Passau may call itself a "Nibelung city".

As for festivals, besides the Maidult in May and the Herbstdult in September, both held since 2005 on newly designed grounds in Kohlbruck, there are the three-day Haferlfest in the Ilzstadt in July and the two-day Bürgerfest, held every two years in the old town in June. In the Hals district the Inselfest takes place each year on an island in the Ilz.

In cooperation with the University of Passau, three festivals for film-makers have become established: the biennial Passau International Film Festival, the Ibero-American film festival muestra!, and the Crank Cookie short-film days.

 

Passau (previously Latin: Batavis or Batavia) is a town in Lower Bavaria, Germany. It is also known as the Dreiflüssestadt or "City of Three Rivers," because the Danube is joined at Passau by the Inn from the south and the Ilz from the north.

 

Passau's population is 50,415, of whom about 11,000 are students at the local University of Passau. The university, founded in the late 1970s, is the extension of the Institute for Catholic Studies (Katholisch-Theologische Fakultät) founded in 1622.[2] It is renowned in Germany for its institutes of Economics, Law, Theology, Computer Sciences and Cultural Studies.

In the 2nd century BC, many of the Boii tribe were pushed north across the Alps out of northern Italy by the Romans. They established a new capital called Boiodurum by the Romans (from Gaulish Boioduron), now within the Innstadt district of Passau.[3]

 

Passau was an ancient Roman colony in Noricum called Batavis, named after the Batavi, an ancient Germanic tribe often mentioned by classical authors and regularly associated with the Suebian marauders, the Heruli.

 

During the second half of the 5th century, St. Severinus established a monastery here. In 739, an English monk called Boniface founded the diocese of Passau and this was the largest diocese of the Holy Roman Empire for many years.

 

In the Treaty of Passau (1552), Archduke Ferdinand I, representing Emperor Charles V, secured the agreement of the Protestant princes to submit the religious question to a diet. This led to the Peace of Augsburg in 1555.

 

During the Renaissance and early modern period, Passau was one of the most prolific centres of sword and bladed weapon manufacture in Germany (after Solingen). Passau smiths stamped their blades with the Passau wolf, usually a rather simplified rendering of the wolf on the city's coat-of-arms. Superstitious warriors believed that the Passau wolf conferred invulnerability on the blade's bearer, and thus Passau swords acquired a great premium. As a result, the whole practice of placing magical charms on swords to protect the wearers came to be known for a time as "Passau art." (See Eduard Wagner, Cut and Thrust Weapons, 1969). Other cities' smiths, including those of Solingen, recognized the marketing value of the Passau wolf and adopted it for themselves. By the 17th century, Solingen was producing more wolf-stamped blades than Passau was. In 1662, a devastating fire consumed most of the city. Passau was subsequently rebuilt in the Baroque style.

Passau was secularised and divided between Bavaria and Salzburg in 1803. The portion belonging to Salzburg became part of Bavaria in 1805.

From 1892 until 1894, Adolf Hitler and his family lived in Passau. The city archives mention Hitler being in Passau on four different occasions in the 1920s for speeches. On November 3, 1902, Heinrich Himmler and his family arrived from Munich. They lived at Theresienstraße 394 (currently Theresienstraße 22) until September 2, 1904. Himmler maintained contact with locals until May 1945.

 

During World War II, the town housed three sub-camps of the infamous Mauthausen-Gusen concentration camp: Passau I (Oberilzmühle), Passau II (Waldwerke Passau-Ilzstadt) and Passau III (Jandelsbrunn).

 

On May 3, 1945, a message from Major General Stanley Eric Reinhart’s 261st Infantry Regiment stated at 3:15 am: "AMG Officer has unconditional surrender of PASSAU signed by Burgermeister, Chief of Police and Lt. Col of Med Corps there. All troops are to turn themselves in this morning."

 

After World War II, Passau was the site of a displaced-persons camp in the American occupation zone. Some sites relating to World War II can still be seen in the city today.

 

On 2 June 2013, the old town suffered severe flooding as a result of several days of rain and its location at the confluence of three rivers.[4]

Until 2013, the city of Passau was subdivided into eight statistical districts, which generally coincided with formerly separate municipalities. Since 2013, the city has been divided into 16 so-called open-council areas ("Bürgerversammlungsgebiete").

 

Main sights

Tourism in Passau focuses mainly on the three rivers, St. Stephen's Cathedral (Der Passauer Stephansdom) and the Old City (Die Altstadt). With 17,774 pipes and 233 registers, the organ at St. Stephen's was long held to be the largest church pipe organ in the world and is today second in size only to the organ at First Congregational Church, Los Angeles, which was expanded in 1994. Organ concerts are held daily between May and September. St. Stephen's is a true masterpiece of Italian Baroque, built by the Italian architect Carlo Lurago and decorated in part by Carpoforo Tencalla. Many river cruises down the Danube start at Passau, and there is a cycling path all the way down to Vienna. The town is also notable for its Gothic and Baroque architecture, and is dominated by the Veste Oberhaus, the former fortress of the bishop, on the mountain crest between the Danube and the Ilz. Right beside the town hall is the Scharfrichterhaus, an important jazz and cabaret stage on which political cabaret is performed.

Source:

en.wikipedia.org/wiki/Passau

de.wikipedia.org/wiki/Passau

 

Photography (see section below for etymology) is the art, science and practice of creating durable images by recording light or other electromagnetic radiation, either chemically by means of a light-sensitive material such as photographic film, or electronically by means of an image sensor.[1] Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. The result in an electronic image sensor is an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing.

 

The result in a photographic emulsion is an invisible latent image, which is later chemically developed into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.

 

Photography has many uses for business, science, manufacturing (e.g. photolithography), art, recreational purposes, and mass communication.

 

The word "photography" was created from the Greek roots φωτός (phōtos), genitive of φῶς (phōs), "light"[2] and γραφή (graphé) "representation by means of lines" or "drawing",[3] together meaning "drawing with light".[4]

 

Several people may have coined the same new term from these roots independently. Hercules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, photographie, in private notes which a Brazilian photography historian believes were written in 1834.[5] Johann von Maedler, a Berlin astronomer, is credited in a 1932 German history of photography as having used it in an article published on 25 February 1839 in the German newspaper Vossische Zeitung.[6] Both of these claims are now widely reported, but apparently neither has ever been independently confirmed beyond reasonable doubt. Credit has traditionally been given to Sir John Herschel both for coining the word and for introducing it to the public. His uses of it in private correspondence prior to 25 February 1839 and at his Royal Society lecture on the subject in London on 14 March 1839 have long been amply documented and accepted as settled fact.

 

History and evolution

Precursor technologies

Photography is the result of combining several technical discoveries. Long before the first photographs were made, the Chinese philosopher Mo Di and the Greek thinkers Aristotle and Euclid described a pinhole camera in the 5th and 4th centuries BCE.[8][9] In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments.[10] Ibn al-Haytham (Alhazen) (965–1040) studied the camera obscura and pinhole camera,[9][11] Albertus Magnus (1193–1280) discovered silver nitrate,[12] and Georg Fabricius (1516–71) discovered silver chloride.[13] Techniques described in the Book of Optics are capable of producing primitive photographs using medieval materials.[14][15][16]

 

Daniele Barbaro described a diaphragm in 1566.[17] Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694.[18] The fiction book Giphantie, published in 1760, by French author Tiphaigne de la Roche, described what can be interpreted as photography.[17]

 

The camera obscura, which provides an image of a scene, dates back to ancient China. Leonardo da Vinci mentioned natural camera obscuras formed by dark caves on the edge of a sunlit valley: a hole in the cave wall acts as a pinhole camera and projects a laterally reversed, upside-down image onto a piece of paper. The birth of photography was thus primarily concerned with developing a means to fix and retain the image produced by the camera obscura.

 

The first success of reproducing images without a camera occurred when Thomas Wedgwood, from the famous family of potters, obtained copies of paintings on leather using silver salts. Since he had no way of permanently fixing those reproductions (stabilizing the image by washing out the non-exposed silver salts), they would turn completely black in the light and thus had to be kept in a dark room for viewing.

 

Renaissance painters used the camera obscura which, in fact, gives the optical rendering in color that dominates Western art. Camera obscura literally means "dark chamber" in Latin: a box with a hole in it that allows light to pass through and project an image onto a surface inside.

 

First camera photography (1820s)

Invented in the early decades of the 19th century, photography by means of the camera seemed able to capture more detail and information than traditional media, such as painting and sculpture.[19] Photography as a usable process goes back to the 1820s with the development of chemical photography. The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it.[7] Niépce was successful again in 1825. He made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens), in 1826 or 1827.[20]

 

Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. Working in partnership with Louis Daguerre, he developed a somewhat more sensitive process that produced visually superior results, but it still required a few hours of exposure in the camera. Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process, the essential elements of which were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: the busy boulevard appears deserted because the moving pedestrian and horse-drawn traffic left no trace during the approximately ten-minute exposure, but one man having his boots polished stood sufficiently still to be visible. Eventually, France agreed to pay Daguerre a pension for his process in exchange for the right to present his invention to the world as the gift of France, which occurred on 19 August 1839.

Meanwhile, in Brazil, Hercules Florence had already created his own process in 1832, naming it Photographie, and an English inventor, William Fox Talbot, had created another method of making a reasonably light-fast silver process image but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his method and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, with exposures comparable to the daguerreotype. In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies, the basis of most chemical photography up to the present day. Daguerreotypes could only be replicated by rephotographing them with a camera.[21] Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence.[22][23]

 

John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839.

 

In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets to the collodion process; the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper.

 

Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1884, George Eastman developed an early type of film to replace photographic plates, leading to the technology used by film cameras today.

 

In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize for Physics in 1908.

 

Black-and-white

See also: Monochrome photography

All photography was originally monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost and its "classic" photographic look. The tones and the contrast between light and dark areas define black-and-white photography.[24] Monochromatic pictures are not necessarily composed of pure blacks and whites; depending on the process, they may contain other hues. The cyanotype process produces an image composed of blue tones. The albumen process, first used more than 150 years ago, produces brown tones.

 

Many photographers continue to produce some monochrome images, often because of the established archival permanence of well processed silver halide based materials. Some full color digital images are processed using a variety of techniques to create black and whites, and some manufacturers produce digital cameras that exclusively shoot monochrome.

 

Color

Color photography was explored beginning in the mid-19th century. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.

 

The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by physicist James Clerk Maxwell in 1855. Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image.
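Maxwell's three-separation principle is easy to demonstrate in code. The sketch below is an illustration added here, not part of the original text; the function and variable names are invented for the example. It combines three grayscale filter exposures into one RGB image by the additive method:

```python
def combine_separations(red, green, blue):
    """Combine three grayscale separation exposures (rows of values
    in [0, 1], shot through red, green and blue filters) into one
    RGB image, per Maxwell's additive method."""
    assert len(red) == len(green) == len(blue)
    return [
        [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
        for row_r, row_g, row_b in zip(red, green, blue)
    ]

# Toy 2x2 scene: the top-left point registered only through the red
# filter, so it reconstructs as pure red in the combined image.
r = [[1.0, 0.0], [0.0, 0.0]]
g = [[0.0, 1.0], [0.0, 0.0]]
b = [[0.0, 0.0], [1.0, 0.0]]
rgb = combine_separations(r, g, b)
print(rgb[0][0])  # (1.0, 0.0, 0.0) -> red
```

Projecting the three separations through matching filters, as described next, performs the same addition optically.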

 

Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s.

 

Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images.

 

The development of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability.

 

Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s.

 

Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multilayer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure.

 

Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multilayer emulsion and the same principles, most closely resembling Agfa's product.

 

Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963.

 

Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment.

 

Digital photography

Main article: Digital photography

See also: Digital camera and Digital versus film photography

In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born.

 

Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.[25] An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital images are easily manipulated. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications.

  

Photography gained the interest of many scientists and artists from its inception. Scientists have used photography to record and study movements, such as Eadweard Muybridge's study of human and animal locomotion in 1887. Artists are equally interested by these aspects but also try to explore avenues other than the photo-mechanical representation of reality, such as the pictorialist movement.

 

Military, police, and security forces use photography for surveillance, recognition and data storage. Photography is used by amateurs to preserve memories, to capture special moments, to tell stories, to send messages, and as a source of entertainment. High speed photography allows for visualizing events that are too fast for the human eye.

 

Technical aspects

Main article: Camera

The camera is the image-forming device, and photographic film or a silicon electronic image sensor is the sensing medium. The respective recording medium can be the film itself, or a digital electronic or magnetic memory.[26]

 

Photographers control the camera and lens to "expose" the light recording material (such as film) to the required amount of light to form a "latent image" (on film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper or film.

 

The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. The subject being photographed, however, must be illuminated. Cameras can range from small devices to very large ones, even a whole room that is kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera).

 

As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens.

 

The movie camera is a type of photographic camera which takes a rapid sequence of photographs on strips of film. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures together to create the illusion of motion.[27]

 

Camera controls are interrelated. The total amount of light reaching the film plane (the 'exposure') changes with the duration of exposure, the aperture of the lens, and the effective focal length of the lens (which in variable focal length lenses can force a change in aperture as the lens is zoomed). Changing any of these controls can alter the exposure. Many cameras may be set to adjust most or all of these controls automatically. This automatic functionality is useful for occasional photographers in many situations.

 

The duration of an exposure is referred to as shutter speed, often even in cameras that do not have a physical shutter, and is typically measured in fractions of a second. Exposures of one to several seconds are possible, usually for still-life subjects, and exposure times for night scenes can extend to several hours. For a subject in motion, however, a fast shutter speed is needed to prevent the photograph from coming out blurry.[29]

 

The effective aperture is expressed by an f-number or f-stop (derived from focal ratio), which is the ratio of the focal length to the diameter of the aperture. Longer lenses will pass less light even though the diameter of the aperture is the same, due to the greater distance the light has to travel; shorter lenses (a shorter focal length) will be brighter with the same size of aperture.

 

The smaller the f/number, the larger the effective aperture. The present system of f/numbers to give the effective aperture of a lens was standardized by an international convention. There were earlier, different series of numbers in older cameras.

 

If the f-number is decreased by a factor of √2, the aperture diameter is increased by the same factor, and its area is increased by a factor of 2. The f-stops that might be found on a typical lens include 2.8, 4, 5.6, 8, 11, 16, 22, 32, where going up "one stop" (using lower f-stop numbers) doubles the amount of light reaching the film, and stopping down one stop halves the amount of light.
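The arithmetic behind the stop series can be checked with a short sketch (illustrative only, added here; the `light_ratio` helper is invented for the example):

```python
# Nominal full-stop marks found on a typical lens; each is a
# conventional rounding of the previous value multiplied by sqrt(2).
nominal = [2.8, 4, 5.6, 8, 11, 16, 22, 32]

def light_ratio(f_a, f_b):
    """Relative light admitted by f/f_a versus f/f_b: aperture area
    scales with the square of the diameter, hence (f_b / f_a) ** 2."""
    return (f_b / f_a) ** 2

# Stepping down the series roughly halves the light each time.
for a, b in zip(nominal, nominal[1:]):
    print(f"f/{a} -> f/{b}: x{light_ratio(b, a):.2f}")
```

The printed factors hover around 0.5 rather than hitting it exactly because the marked f-numbers are rounded; the underlying series is exact powers of √2.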

 

Image capture can be achieved through various combinations of shutter speed, aperture, and film or sensor speed. Different (but related) settings of aperture and shutter speed enable photographs to be taken under various conditions of film or sensor speed, lighting and motion of subjects and/or camera, and desired depth of field. A slower speed film will exhibit less "grain", and a slower speed setting on an electronic sensor will exhibit less "noise", while higher film and sensor speeds allow for a faster shutter speed, which reduces motion blur or allows the use of a smaller aperture to increase the depth of field.

 

For example, a wider aperture is used in lower light and a smaller aperture in brighter light. If a subject is in motion, then a high shutter speed may be needed. A tripod can also be helpful in that it enables a slower shutter speed to be used.

 

For example, f/8 at 8 ms (1/125 of a second) and f/5.6 at 4 ms (1/250 of a second) yield the same amount of light. The chosen combination has an impact on the final result. The aperture and focal length of the lens determine the depth of field, which refers to the range of distances from the lens that will be in focus. A longer lens or a wider aperture will result in "shallow" depth of field (i.e. only a small plane of the image will be in sharp focus). This is often useful for isolating subjects from backgrounds as in individual portraits or macro photography.
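This equivalence can be verified numerically with the standard exposure-value formula EV = log2(N²/t) at ISO 100 (the sketch and function name are added here for illustration). The tiny residual difference comes from f/5.6 being a conventional rounding of 4√2:

```python
import math

def exposure_value(f_number, shutter_s):
    """EV at ISO 100: EV = log2(N^2 / t), where N is the f-number
    and t is the shutter time in seconds."""
    return math.log2(f_number ** 2 / shutter_s)

ev_a = exposure_value(8, 1 / 125)    # f/8 at 8 ms
ev_b = exposure_value(5.6, 1 / 250)  # f/5.6 at 4 ms
print(round(ev_a, 2), round(ev_b, 2))  # both about 12.9 EV
```

Any pair of settings with the same EV admits the same total light, which is exactly the trade-off the text describes.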

 

Conversely, a shorter lens, or a smaller aperture, will result in more of the image being in focus. This is generally more desirable when photographing landscapes or groups of people. With very small apertures, such as pinholes, a wide range of distance can be brought into focus, but sharpness is severely degraded by diffraction with such small apertures. Generally, the highest degree of "sharpness" is achieved at an aperture near the middle of a lens's range (for example, f/8 for a lens with available apertures of f/2.8 to f/16). However, as lens technology improves, lenses are becoming capable of making increasingly sharp images at wider apertures.

 

Image capture is only part of the image forming process. Regardless of material, some process must be employed to render the latent image captured by the camera into a viewable image. With slide film, the developed film is just mounted for projection. Print film requires the developed film negative to be printed onto photographic paper or transparency. Digital images may be uploaded to an image server (e.g., a photo-sharing web site), viewed on a television, or transferred to a computer or digital photo frame. Every type can be printed on more "classical" mediums such as regular paper or photographic paper, for example.

 

Prior to the rendering of a viewable image, modifications can be made using several controls. Many of these controls are similar to controls during image capture, while some are exclusive to the rendering process. Most printing controls have equivalent digital concepts, but some create different effects. For example, dodging and burning controls are different between digital and film processes. Other printing modifications include:

Digital point-and-shoot cameras have become widespread consumer products, outselling film cameras, and including new features such as video and audio recording. Kodak announced in January 2004 that it would no longer sell reloadable 35 mm cameras in western Europe, Canada and the United States after the end of that year. Kodak was at that time a minor player in the reloadable film camera market. In January 2006, Nikon followed suit and announced that it would stop production of all but two models of its film cameras: the low-end Nikon FM10 and the high-end Nikon F6. On 25 May 2006, Canon announced it would stop developing new film SLR cameras.[34] Though most new camera designs are now digital, a new 6x6cm/6x7cm medium format film camera was introduced in 2008 in a cooperation between Fuji and Voigtländer.[35][36]

 

According to a survey made by Kodak in 2007 when the majority of photography was already digital, 75 percent of professional photographers say they will continue to use film, even though some embrace digital.[37]

 

According to the PMA, nearly a billion rolls of film were sold in the year 2000; by 2011 the figure had fallen to a mere 20 million rolls, plus 31 million single-use cameras.[38]

 

Quelle:

en.wikipedia.org/wiki/Photography

de.wikipedia.org/wiki/Fotografie

   

High-dynamic-range imaging (HDRI or HDR) is a set of techniques used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible using standard digital imaging or photographic techniques. HDR images can represent the range of intensity levels found in real scenes, from direct sunlight to faint starlight, more accurately, and are often captured by way of a plurality of differently exposed pictures of the same subject matter.[1][2][3][4]

 

Non-HDR cameras take photographs with a limited exposure range, resulting in the loss of detail in bright or dark areas. HDR compensates for this loss of detail by capturing multiple photographs at different exposure levels and combining them to produce a photograph representative of a broader tonal range.

 

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR)[5] or standard-dynamic-range (SDR)[6] photographs. HDR images can also be acquired using special image sensors, like oversampled binary image sensor. Tone mapping methods, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.

In photography, dynamic range is measured in EV differences (known as stops) between the brightest and darkest parts of the image that show detail. An increase of one EV, or one stop, is a doubling of the amount of light; a difference of 10 EV, for example, corresponds to a light ratio of 2^10 = 1024.

High-dynamic-range photographs are generally achieved by capturing multiple standard photographs, often using exposure bracketing, and then merging them into an HDR image. Digital photographs are often encoded in a camera's raw image format, because 8 bit JPEG encoding doesn't offer enough values to allow fine transitions (and introduces undesirable effects due to the lossy compression).
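The merging step can be sketched as follows. This is a toy illustration added here, not any particular program's algorithm; the hat-shaped weighting that distrusts near-clipped pixels is one common heuristic:

```python
def merge_hdr(images, times):
    """Estimate a radiance map from bracketed 8-bit exposures.
    `images`: list of equal-length pixel lists (values 0-255);
    `times`: matching exposure times in seconds. Each pixel value is
    divided by its exposure time and averaged with a hat-shaped
    weight that trusts mid-tones more than near-clipped extremes."""
    def weight(z):
        return z if z <= 127 else 255 - z  # peaks at mid-gray, 0 at clip

    merged = []
    for i in range(len(images[0])):
        num = den = 0.0
        for img, t in zip(images, times):
            w = weight(img[i])
            num += w * img[i] / t
            den += w
        merged.append(num / den if den else 0.0)
    return merged

# Two toy exposures of a 3-pixel scene, one stop apart. The highlight
# (255) is clipped in the longer exposure, so only the shorter one
# contributes to that pixel.
short = [16, 100, 240]   # 0.25 s exposure
long_ = [32, 200, 255]   # 0.5 s exposure
radiance = merge_hdr([short, long_], [0.25, 0.5])
print(radiance)  # [64.0, 400.0, 960.0]
```

The result is in relative radiance units: the recovered highlight is 15 times brighter than the shadow, a ratio no single 8-bit exposure could hold with the same mid-tone placement.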

 

The images from any camera that allows manual exposure control can be used to create HDR images. This includes film cameras, though the images may need to be digitized so they can be processed with software HDR methods.

 

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II.[10] As the popularity of this imaging method grows, several camera manufacturers are now offering built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone mapped JPEG file.[11] The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format.[12] Even some smartphones now include HDR modes, and most platforms have apps that provide HDR picture taking.[13]

 

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.[14]

 

Camera characteristics

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and spectral calibration affect resulting high-dynamic-range images.[15]

 

Tone mapping

Main article: Tone mapping

Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast.
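A minimal global operator in the spirit of Reinhard's L/(1+L) curve can illustrate the idea. This sketch is added here for illustration and is not the only or canonical method; the `key` parameter and the names are assumptions:

```python
import math

def tone_map(radiance, key=0.18):
    """Global Reinhard-style operator: scale the scene so its
    log-average luminance sits at `key` (mid-gray), then compress
    with L/(1+L) so highlights roll off smoothly into [0, 1)."""
    eps = 1e-6  # guard against log(0) for black pixels
    log_avg = math.exp(sum(math.log(eps + L) for L in radiance) / len(radiance))
    scaled = [key * L / log_avg for L in radiance]
    return [L / (1.0 + L) for L in scaled]

hdr = [0.01, 1.0, 100.0, 10000.0]  # six decades of scene luminance
ldr = tone_map(hdr)
print([round(v, 3) for v in ldr])  # every value now fits in (0, 1)
```

A purely global curve like this preserves the overall tonal order; the local-contrast-preserving operators mentioned above additionally adapt the curve per neighborhood.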

 

Software

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include

Adobe Photoshop

Dynamic Photo HDR

HDR PhotoStudio

Luminance HDR

Oloneo PhotoEngine

Photomatix Pro

PTGui

Comparison with traditional digital images

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors that should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.[16][17][18]

 

HDR images often don't use fixed ranges per color channel, unlike traditional images, in order to represent many more colors over a much wider dynamic range. For that purpose, they don't use integer values to represent the single color channels (e.g., 0..255 in an 8 bit per pixel interval for red, green and blue) but instead use a floating point representation. Common are 16-bit (half precision) or 32-bit floating point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
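The gain from floating-point encoding can be quantified in a few lines (illustrative arithmetic added here; "stops" means log2 of the ratio between the largest and smallest representable non-zero levels considered):

```python
import math

# Linear 8-bit integers: the brightest code value sits 255:1 above
# the darkest non-zero one.
stops_8bit = math.log2(255)

# IEEE 754 half precision (binary16): largest finite value is 65504,
# smallest positive *normal* value is 2**-14 (subnormals go lower).
stops_half = math.log2(65504 / 2 ** -14)

print(f"8-bit linear: {stops_8bit:.1f} stops")  # ~8 stops
print(f"float16:      {stops_half:.1f} stops")  # ~30 stops
```

Roughly 30 stops in half precision against 8 in linear 8-bit is why scene-referred formats favor floating point even at the same 16 bits per channel as a deep integer encoding.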

The idea of using several exposures to fix a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, the luminosity range being too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.[20]

 

Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This is effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. Smith spent five days printing the image to reproduce the tonal range of the scene, which extends from a bright lamp (relative to the scene) to a dark shadow.[22]

 

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which features dodging and burning prominently, in the context of his Zone System.

 

With the advent of color photography, tone mapping in the darkroom was no longer possible, due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response over the years, or shot in black and white to use tone mapping methods.

Film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force".[23] This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color.[24] The dynamic range of this extended range film has been estimated as 1:108.[25] It has been used to photograph nuclear explosions,[26] for astronomical photography,[27] for spectrographic research,[28] and for medical imaging.[29] Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

 

Late-twentieth century

The concept of neighborhood tone mapping was applied to video cameras by a group at the Technion in Israel, led by Prof. Y. Y. Zeevi, which filed a patent on the concept in 1988.[30] In 1993 the same group introduced the first commercial medical camera that captured multiple images with different exposures in real time and produced an HDR video image.[31]

 

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (applied across the entire image), and then tone mapping this result. Global HDR was first introduced in 1993,[1] resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.[2]

 

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory.[32] Mann's method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.[32]
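The two-step idea can be illustrated with a short sketch. This is an illustrative simplification, not Mann's patented method: the function names and the simple hat-shaped pixel weighting are assumptions. Step (1) combines differently exposed images into a floating-point radiance map with global-only operations; step (2) tone-maps that map back to a displayable range.

```python
import numpy as np

def radiance_map(images, exposure_times):
    """Estimate a floating-point radiance map from differently exposed
    images of the same scene (global step, simplified).

    images: list of arrays with linear pixel values in [0, 1]
    exposure_times: exposure time of each image, in seconds
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Weight mid-tones most: near-black and near-white pixels
        # carry little information in that particular exposure.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t  # each image estimates radiance = value / time
        den += w
    return num / np.maximum(den, 1e-8)

def tone_map(radiance):
    """Simple global tone-mapping operator of the form x / (1 + x)."""
    return radiance / (1.0 + radiance)
```

Given, say, a pixel of true radiance 0.5 shot at 1 s (value 0.5) and 2 s (value 1.0, clipped), the clipped sample gets zero weight and the radiance is recovered from the shorter exposure alone.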

 

In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping.

 

While custom high-dynamic-range digital video solutions had been developed for industrial manufacturing during the 1980s, it was not until the early 2000s that several scholarly research efforts used consumer-grade sensors and cameras.[34] A few companies such as RED[35] and Arri[36] have been developing digital sensors capable of a higher dynamic range. The RED EPIC-X can capture HDRx images with a user-selectable 1–3 stops of additional highlight latitude in the 'x' channel, which can be merged with the normal channel in post-production software. With the advent of low-cost consumer digital cameras, many amateurs began posting tone-mapped HDR time-lapse videos on the Internet, essentially sequences of still photographs played in quick succession. In 2010 the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a beam splitter and consumer-grade HD video cameras.[37] Similar methods had been described in the academic literature in 2001[38] and 2007.[39]

 

Modern movies are often filmed with cameras featuring a higher dynamic range, and legacy movies can be upgraded even if manual intervention is needed for some frames (as happened in the past when black-and-white films were upgraded to color). Special effects, especially those in which real and synthetic footage are seamlessly mixed, require both HDR shooting and rendering. HDR video is also needed in applications where capturing the temporal changes of a scene demands high accuracy, for example in the monitoring of industrial processes such as welding, in predictive driver-assistance systems in the automotive industry, and in surveillance systems. HDR video can also speed up image acquisition in applications that need a large number of static HDR images, such as image-based methods in computer graphics. Finally, with the spread of TV sets with enhanced dynamic range, broadcasting HDR video may become important, but may take a long time to arrive due to standardization issues. For this particular application, having intelligent TV sets enhance the current low-dynamic-range (LDR) video signal to HDR seems a more viable near-term solution.

 

More and more CMOS image sensors now have high-dynamic-range capability within the pixels themselves. Such pixels are intrinsically non-linear (by design), so that the wide dynamic range of the scene is non-linearly compressed into a smaller dynamic range inside the pixel.[41] Such sensors are used in extreme-dynamic-range applications such as welding and automotive imaging.

 

Some other sensors, designed for use in security applications, can automatically provide two or more images per frame with different exposures. For example, a sensor intended for 30 fps video can output 60 fps, with the odd frames at a short exposure time and the even frames at a longer one. Some sensors can even combine the two images on-chip, so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.
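The alternating-exposure scheme above can be sketched in a few lines. This is a toy model: the frame ordering, the 4× exposure ratio, and the clipping threshold are assumptions for illustration. The stream is split into short- and long-exposure sub-streams, and each pixel pair is fused by preferring the long exposure unless it is clipped.

```python
def split_alternating(frames):
    """Split a 60 fps alternating-exposure stream into two 30 fps streams:
    frames at even indices (short exposure) and odd indices (long exposure)."""
    return frames[0::2], frames[1::2]

def fuse_pixel(short_px, long_px, exposure_ratio=4.0, clip=0.95):
    """Fuse one pixel pair into a radiance estimate on a common scale.
    Prefer the long exposure (better signal-to-noise in shadows) unless
    it is clipped, in which case fall back to the scaled short exposure."""
    if long_px < clip:
        return long_px / exposure_ratio  # divide out the longer exposure
    return short_px                      # short exposure is at unit scale
```

A pixel of radiance 0.1 reads 0.4 in the long exposure and fuses back to 0.1; a bright pixel that saturates the long exposure falls back to the short-exposure reading.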

 

Source:

 

en.wikipedia.org/wiki/High-dynamic-range_imaging

 

de.wikipedia.org/wiki/High_Dynamic_Range_Image

  

Photography (see section below for etymology) is the art, science and practice of creating durable images by recording light or other electromagnetic radiation, either chemically by means of a light-sensitive material such as photographic film, or electronically by means of an image sensor.[1] Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. The result in an electronic image sensor is an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing.

 

The result in a photographic emulsion is an invisible latent image, which is later chemically developed into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.

 

Photography has many uses for business, science, manufacturing (e.g. photolithography), art, recreational purposes, and mass communication.

 

The word "photography" was created from the Greek roots φωτός (phōtos), genitive of φῶς (phōs), "light"[2] and γραφή (graphé) "representation by means of lines" or "drawing",[3] together meaning "drawing with light".[4]

 

Several people may have coined the same new term from these roots independently. Hercules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, photographie, in private notes which a Brazilian photography historian believes were written in 1834.[5] Johann von Maedler, a Berlin astronomer, is credited in a 1932 German history of photography with having used it in an article published on 25 February 1839 in the German newspaper Vossische Zeitung.[6] Both of these claims are now widely reported, but apparently neither has ever been independently confirmed beyond reasonable doubt. Credit has traditionally been given to Sir John Herschel both for coining the word and for introducing it to the public. His uses of it in private correspondence prior to 25 February 1839 and at his Royal Society lecture on the subject in London on 14 March 1839 have long been amply documented and accepted as settled fact.

 

History and evolution

Precursor technologies

Photography is the result of combining several technical discoveries. Long before the first photographs were made, the Chinese philosopher Mo Di, the Greek philosopher Aristotle, and the Greek mathematician Euclid described a pinhole camera in the 5th and 4th centuries BCE.[8][9] In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments,[10] Ibn al-Haytham (Alhazen) (965–1040) studied the camera obscura and pinhole camera,[9][11] Albertus Magnus (1193–1280) discovered silver nitrate,[12] and Georg Fabricius (1516–71) discovered silver chloride.[13] Techniques described in the Book of Optics are capable of producing primitive photographs using medieval materials.[14][15][16]

 

Daniele Barbaro described a diaphragm in 1566.[17] Wilhelm Homberg described how light darkened some chemicals (the photochemical effect) in 1694.[18] The fiction book Giphantie (1760) by the French author Tiphaigne de la Roche described what can be interpreted as photography.[17]

 

The camera obscura, which provides an image of a scene, was discovered in ancient China. Leonardo da Vinci mentions natural camera obscura formed by dark caves on the edge of a sunlit valley: a hole in the cave wall acts as a pinhole camera and projects a laterally reversed, upside-down image onto a piece of paper. The birth of photography was thus primarily concerned with developing a means to fix and retain the image produced by the camera obscura.

 

The first success of reproducing images without a camera occurred when Thomas Wedgwood, from the famous family of potters, obtained copies of paintings on leather using silver salts. Since he had no way of permanently fixing those reproductions (stabilizing the image by washing out the non-exposed silver salts), they would turn completely black in the light and thus had to be kept in a dark room for viewing.

 

Renaissance painters used the camera obscura which, in fact, produces the optical rendering in color that dominates Western art. Camera obscura literally means "dark chamber" in Latin: a box with a hole in it which allows light to pass through and project an image onto a piece of paper.

 

First camera photography (1820s)

Invented in the early decades of the 19th century, photography by means of the camera seemed able to capture more detail and information than traditional media, such as painting and sculpture.[19] Photography as a usable process goes back to the 1820s with the development of chemical photography. The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it.[7] Niépce was successful again in 1825. He made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens), in 1826 or 1827.[20]

 

Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. Working in partnership with Louis Daguerre, he developed a somewhat more sensitive process that produced visually superior results, but it still required a few hours of exposure in the camera. Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process, the essential elements of which were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: unlike the other pedestrian and horse-drawn traffic on the busy boulevard, which appears deserted, one man having his boots polished stood sufficiently still throughout the approximately ten-minute-long exposure to be visible. Eventually, France agreed to pay Daguerre a pension for his process in exchange for the right to present his invention to the world as the gift of France, which occurred on 19 August 1839.

Meanwhile, in Brazil, Hercules Florence had already created his own process in 1832, naming it Photographie, and an English inventor, William Fox Talbot, had created another method of making a reasonably light-fast silver process image but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his method and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, with exposures comparable to the daguerreotype. In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies, the basis of most chemical photography up to the present day. Daguerreotypes could only be replicated by rephotographing them with a camera.[21] Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence.[22][23]

 

John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839.

 

In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets of the collodion process: the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal), and the glass negative, which was used to make positive prints on albumen or salted paper.

 

Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1884, George Eastman developed an early type of film to replace photographic plates, leading to the technology used by film cameras today.

 

In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize for Physics in 1908.

 

Black-and-white

See also: Monochrome photography

All photography was originally monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost and its "classic" photographic look. The tones and contrast between light and dark areas define black-and-white photography.[24] Monochrome pictures are not necessarily composed of pure blacks and whites; they can contain other hues depending on the process. The cyanotype process produces an image composed of blue tones; the albumen process, first used more than 150 years ago, produces brown tones.

 

Many photographers continue to produce some monochrome images, often because of the established archival permanence of well processed silver halide based materials. Some full color digital images are processed using a variety of techniques to create black and whites, and some manufacturers produce digital cameras that exclusively shoot monochrome.

 

Color

Color photography was explored beginning in the mid-19th century. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.

 

The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by physicist James Clerk Maxwell in 1855. Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image.

 

Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s.

 

Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images.

 

The development of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability.

 

Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s.

 

Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multilayer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure.

 

Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multilayer emulsion and the same principles, most closely resembling Agfa's product.

 

Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963.

 

Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment.

 

Digital photography

Main article: Digital photography

See also: Digital camera and Digital versus film photography

In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born.

 

Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.[25] An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital imaging is easily manipulated. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications.

  

Photography gained the interest of many scientists and artists from its inception. Scientists have used photography to record and study movements, such as Eadweard Muybridge's study of human and animal locomotion in 1887. Artists are equally interested by these aspects but also try to explore avenues other than the photo-mechanical representation of reality, such as the pictorialist movement.

 

Military, police, and security forces use photography for surveillance, recognition and data storage. Photography is used by amateurs to preserve memories, to capture special moments, to tell stories, to send messages, and as a source of entertainment. High speed photography allows for visualizing events that are too fast for the human eye.

 

Technical aspects

Main article: Camera

The camera is the image-forming device, and photographic film or a silicon electronic image sensor is the sensing medium. The respective recording medium can be the film itself, or a digital electronic or magnetic memory.[26]

 

Photographers control the camera and lens to "expose" the light recording material (such as film) to the required amount of light to form a "latent image" (on film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper or film.

 

The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. The subject being photographed, however, must be illuminated. Cameras can range from small to very large, a whole room that is kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera).

 

As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens.

 

The movie camera is a type of photographic camera which takes a rapid sequence of photographs on strips of film. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures together to create the illusion of motion.[27]

 

Camera controls are interrelated. The total amount of light reaching the film plane (the "exposure") changes with the duration of exposure, the aperture of the lens, and the effective focal length of the lens (which, in variable-focal-length lenses, can force a change in aperture as the lens is zoomed). Changing any of these controls can alter the exposure. Many cameras may be set to adjust most or all of these controls automatically. This automatic functionality is useful for occasional photographers in many situations.

 

The duration of an exposure is referred to as shutter speed, often even in cameras that do not have a physical shutter, and is typically measured in fractions of a second. Exposures from one to several seconds are possible, usually for still-life subjects, and for night scenes exposure times can run to several hours. For a subject in motion, however, a fast shutter speed is needed to prevent the photograph from coming out blurred.[29]

 

The effective aperture is expressed by an f-number or f-stop (derived from focal ratio), which is proportional to the ratio of the focal length to the diameter of the aperture. At the same aperture diameter, a longer lens passes less light, because the light has a greater distance to travel; a lens with a shorter focal length is brighter for the same aperture diameter.

 

The smaller the f-number, the larger the effective aperture. The present system of f-numbers giving the effective aperture of a lens was standardized by an international convention; earlier, different series of numbers were used in older cameras.

 

If the f-number is decreased by a factor of √2, the aperture diameter is increased by the same factor, and its area is increased by a factor of 2. The f-stops found on a typical lens include 2.8, 4, 5.6, 8, 11, 16, 22 and 32; opening up "one stop" (moving to the next lower f-stop number) doubles the amount of light reaching the film, and stopping down one stop halves it.
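The √2 relationship can be checked numerically with a small illustrative script (the function name is ours): each full stop multiplies the f-number by √2, and the light admitted, proportional to 1/N², halves.

```python
import math

# Full stops are successive powers of sqrt(2): f/1, f/1.4, f/2, f/2.8, ...
# Marketed values such as f/5.6 and f/22 are conventional roundings
# of the exact values 5.657 and 22.63.
full_stops = [math.sqrt(2) ** n for n in range(11)]

def relative_light(f_number):
    """Light admitted relative to f/1, proportional to 1 / N^2."""
    return 1.0 / f_number ** 2
```

Going from f/4 to f/2.8 (one stop) exactly doubles `relative_light` when the exact value 2√2 is used in place of the rounded 2.8.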

 

Image capture can be achieved through various combinations of shutter speed, aperture, and film or sensor speed. Different (but related) settings of aperture and shutter speed enable photographs to be taken under various conditions of film or sensor speed, lighting and motion of subjects and/or camera, and desired depth of field. A slower speed film will exhibit less "grain", and a slower speed setting on an electronic sensor will exhibit less "noise", while higher film and sensor speeds allow for a faster shutter speed, which reduces motion blur or allows the use of a smaller aperture to increase the depth of field.

 

For example, a wider aperture is used in lower light and a narrower aperture in brighter light. If a subject is in motion, a fast shutter speed may be needed. A tripod can also be helpful, in that it enables a slower shutter speed to be used.

 

For example, f/8 at 8 ms (1/125 of a second) and f/5.6 at 4 ms (1/250 of a second) yield the same exposure. The chosen combination has an impact on the final result. The aperture and focal length of the lens determine the depth of field, which refers to the range of distances from the lens that will be in focus. A longer lens or a wider aperture will result in "shallow" depth of field (i.e. only a small plane of the image will be in sharp focus). This is often useful for isolating subjects from backgrounds, as in individual portraits or macro photography.
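The equivalence of such pairs can be expressed with the exposure value, EV = log2(N²/t): combinations with equal EV admit the same amount of light. A minimal sketch (the function name is ours, and nominal f/5.6 is treated as its exact value 4√2):

```python
import math

def exposure_value(f_number, shutter_seconds):
    """EV = log2(N^2 / t); equal EV means equal light reaching the film."""
    return math.log2(f_number ** 2 / shutter_seconds)
```

Here `exposure_value(8, 1/125)` equals `exposure_value(2 ** 2.5, 1/250)`, since 8² · 125 = (2√2·2)² · 250 = 8000.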

 

Conversely, a shorter lens, or a smaller aperture, will result in more of the image being in focus. This is generally more desirable when photographing landscapes or groups of people. With very small apertures, such as pinholes, a wide range of distance can be brought into focus, but sharpness is severely degraded by diffraction with such small apertures. Generally, the highest degree of "sharpness" is achieved at an aperture near the middle of a lens's range (for example, f/8 for a lens with available apertures of f/2.8 to f/16). However, as lens technology improves, lenses are becoming capable of making increasingly sharp images at wider apertures.

 

Image capture is only part of the image-forming process. Regardless of material, some process must be employed to render the latent image captured by the camera into a viewable image. With slide film, the developed film is simply mounted for projection. Print film requires the developed film negative to be printed onto photographic paper or transparency. Digital images may be uploaded to an image server (e.g., a photo-sharing web site), viewed on a television, or transferred to a computer or digital photo frame. Every type can also be printed on more "classical" media such as regular paper or photographic paper.

 

Prior to the rendering of a viewable image, modifications can be made using several controls. Many of these controls are similar to controls during image capture, while some are exclusive to the rendering process. Most printing controls have equivalent digital concepts, but some create different effects. For example, dodging and burning controls work differently in digital and film processes. Other printing modifications are possible as well.

Digital point-and-shoot cameras have become widespread consumer products, outselling film cameras and including new features such as video and audio recording. Kodak announced in January 2004 that it would no longer sell reloadable 35 mm cameras in western Europe, Canada and the United States after the end of that year. Kodak was at that time a minor player in the reloadable film camera market. In January 2006, Nikon followed suit and announced that it would stop production of all but two of its film camera models: the low-end Nikon FM10 and the high-end Nikon F6. On 25 May 2006, Canon announced it would stop developing new film SLR cameras.[34] Though most new camera designs are now digital, a new 6x6cm/6x7cm medium format film camera was introduced in 2008 in a cooperation between Fuji and Voigtländer.[35][36]

 

According to a survey conducted by Kodak in 2007, when the majority of photography was already digital, 75 percent of professional photographers said they would continue to use film, even though some had embraced digital.[37]

 

According to the PMA, nearly a billion rolls of film were sold in 2000; by 2011 this had fallen to a mere 20 million rolls, plus 31 million single-use cameras.[38]

 

Source:

en.wikipedia.org/wiki/Photography

de.wikipedia.org/wiki/Fotografie

  

Fotografie or Photographie (from Greek φῶς, phos, genitive φωτός, photos, "light (of the heavenly bodies)", "brightness", and γράφειν, graphein, "to draw", "to scratch", "to paint", "to write") denotes

  

an imaging method,[1] in which, with the aid of optical processes, a light image is projected onto a light-sensitive medium and stored there directly and permanently (analog process), or converted into electronic data and stored (digital process).

the permanent light image (slide, film frame or paper print; in short, picture, colloquially also photo) produced by photographic processes; this can be either a positive or a negative on film, foil, paper or other photographic carriers. Photographic images are reproduced as prints, enlargements, film copies, or as exposures or prints made from digital image files. The corresponding profession is that of the photographer.

images taken for the cinema. Any number of photographic images are recorded on film as series of individual frames, which can later be shown with a film projector as moving images (see film).

  

The term Photographie was first used (even before English or French publications) on 25 February 1839 by the astronomer Johann Heinrich von Mädler in the Vossische Zeitung.[2] Into the 20th century, photography denoted all images produced purely by light on a chemically treated surface. With the German spelling reform of 1901, the spelling "Fotografie" was recommended, though it has not fully prevailed to this day. Mixed spellings such as "Fotographie" or "Photografie", as well as adjectives or nouns derived from them, have always been incorrect.

  

General

Photography is a medium used in very different contexts. Photographic images can, for example, be objects of a primarily artistic (artistic photography) or primarily commercial character (industrial photography, advertising and fashion photography). Photography can be considered from artistic, technical (photographic technology), economic (photographic industry), and socio-societal (amateur, workers', and documentary photography) points of view. Photographs are also used in journalism and in medicine.

  

Photography is in part a subject of research and teaching in art history and in the still young discipline of image science. The potential status of photography as art was disputed for a long time, but since the photographic movement of Pictorialism around the turn of the 20th century it has ultimately no longer been contested. Some research traditions assign photography to media or communication studies; this classification, too, is disputed.

  

In the course of technological development, the early 21st century saw a gradual shift from classic analog (silver-based) photography to digital photography. The worldwide collapse of the associated industry for analog cameras and for consumables (films, photographic paper, photographic chemicals, darkroom equipment) has led to photography being researched more and more from a cultural-studies and cultural-historical perspective as well. General cultural aspects of this research include, for example, the preservation and documentation of practical knowledge of photographic processes for exposure and processing, as well as changes in the everyday use of photography. The techniques for archiving and preserving analog images, and likewise system-independent long-term digital data storage, are becoming increasingly interesting from a cultural-historical point of view.

  

Photography is subject to complex and multilayered photographic law; when using existing photographs, image rights must be observed.

  

Photographic technology

In principle, photographs are usually taken with the help of an optical system, in many cases a lens. It projects the light emitted or reflected by an object onto the light-sensitive layer of a photographic plate or film, or onto a photoelectric converter, an image sensor.

  

→ Main article: Photographic technology

Photographic cameras

→ Main article: Camera

Photographs are taken with a photographic apparatus (camera). By manipulating the optical system (among other things the aperture setting, focusing, color filtering, the choice of exposure time, lens focal length, lighting, and not least the recording material), the photographer or camera operator has numerous creative options. The single-lens reflex camera has established itself as the most versatile camera design in both the analog and the digital domain. For many tasks, however, a wide variety of special cameras continue to be needed and used.

  

Light-sensitive layer

In film-based photography (e.g. silver-based photography), the light-sensitive layer on the image plane is a dispersion (in everyday usage, an emulsion). It consists of a gel in which small grains of a silver halide (for example silver bromide) are evenly distributed. The finer the grain, the less light-sensitive the layer is (see the ISO 5800 standard), but the better its resolution ("grain"). This light-sensitive layer is given stability by a carrier. Carrier materials include cellulose acetate (formerly cellulose nitrate, i.e. celluloid), plastic films, metal plates, glass plates, and even textiles (see photographic plate and film).

  

In digital photography, the equivalent of the light-sensitive layer is a chip such as a CCD or CMOS sensor.

  

Development and fixing

In film-based photography, developing makes the latent image visible by chemical means. Fixing renders the unexposed silver halide grains water-soluble so that they can be washed out with water, allowing the picture to be viewed in daylight without darkening.

  

Another older process is the dusting-on process, which produces images that can be fired onto glass and porcelain.

  

A digital image does not need to be developed; it is stored electronically and can then be edited on a computer with image-editing software and, if required, exposed onto photographic paper or printed out, for example with an inkjet printer. The further processing of raw data is here, too, referred to as development.

  

The print

A print is the result of a contact copy, an enlargement, or an exposure; the result is usually a paper picture. Prints can be made from film (negative or slide) or from files.

  

Prints made as contact copies have the same size as the recording format; if an enlargement is made from the negative or positive, the size of the resulting picture is a multiple of the size of the original, but the aspect ratio is normally preserved, which in classic photography is 1.5 (i.e. 3:2) or, in the USA, 4:5.

An exception is the cropped enlargement, whose aspect ratio can be set arbitrarily in the carrier of an enlarger; however, a cropped enlargement is usually also exposed onto a paper format with specific dimensions.
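The aspect-ratio arithmetic behind enlargements can be sketched in a few lines. This is an illustrative example only; the 36 × 24 mm frame and the 180 mm target width are assumptions for the sketch, not values from the text:

```python
# Sketch: scaling a negative to a target print width while preserving
# its aspect ratio, as is normally done for enlargements.

def enlarge(frame_w_mm: float, frame_h_mm: float, target_w_mm: float):
    """Return the print dimensions for a given target width."""
    scale = target_w_mm / frame_w_mm          # uniform scale factor
    return target_w_mm, frame_h_mm * scale    # height scales identically

# A 35 mm frame measures 36 x 24 mm, an aspect ratio of 3:2 (1.5).
w, h = enlarge(36.0, 24.0, 180.0)
print(w, h)      # 180.0 120.0
print(w / h)     # 1.5 -> the 3:2 ratio is preserved
```

Because both sides are multiplied by the same scale factor, the ratio of width to height is unchanged, which is exactly the behavior described above for ordinary (non-cropped) enlargements.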

  

The print is a common form of presentation in amateur photography; prints are collected in special boxes or albums. The presentation form of slide projection generally works with the original slide, i.e. a unique original, whereas prints are always copies.

  

History of photography

→ Main article: History and development of photography

Precursors and prehistory

The name camera derives from the precursor of photography, the camera obscura ("dark chamber"), which was known as early as the 11th century and was used by astronomers at the end of the 13th century to observe the sun. Instead of a lens, this camera has only a small hole through which the light rays fall onto a projection surface, from which the upside-down, laterally reversed image can be traced. In Edinburgh and in Greenwich near London, walk-in, room-sized camerae obscurae are a tourist attraction. The Deutsches Filmmuseum also has a camera obscura, in which an image of the opposite bank of the Main is projected.

  

A breakthrough came in 1550 with the reinvention of the lens, which made it possible to produce brighter and at the same time sharper images. In 1685 the deflecting mirror followed, with which an image could be traced onto paper.

  

In the 18th century, the magic lantern, the panorama, and the diorama appeared. Chemists such as Humphry Davy were already beginning to investigate light-sensitive substances and to search for fixing agents.

  

The early processes

What is probably the first photograph in the world was made in the early autumn of 1826 by Joseph Nicéphore Niépce using the heliography process. In 1837, Louis Jacques Mandé Daguerre used a better process, based on developing the pictures with mercury vapor and then fixing them in a hot saline solution or a room-temperature sodium thiosulfate solution. The pictures produced in this way, all of them unique originals on silvered copper plates, were called daguerreotypes. As early as 1835, the Englishman William Fox Talbot had invented the negative-positive process. Even today, some of the historical processes are still used as alternative printing processes in the fine arts and in artistic photography.

  

In 1883, the important Leipzig weekly Illustrirte Zeitung published the first halftone photograph to appear in a German publication, in the form of an autotype, an invention made around 1880 by Georg Meisenbach.

  

20th century

Photographs could initially only be produced as unique originals; with the introduction of the negative-positive process, reproduction by contact printing became possible. In both cases, the size of the finished photograph corresponded to the recording format, which required very large, unwieldy cameras. With roll film and, in particular, the 35 mm camera developed by Oskar Barnack at the Leitz works and introduced in 1924, which used conventional 35 mm cine film, entirely new possibilities opened up for mobile, fast photography. Although the small format made additional enlarging equipment necessary and the image quality could by no means keep up with the large formats, 35 mm established itself as the standard format in most areas of photography.

  

Analog photography

→ Main article: Analog photography

The term

To distinguish it from the new photographic processes of digital photography, the term analog photography, or alternatively the by then outdated spelling Photographie, reappeared at the beginning of the 21st century.[3]

  

To explain the then-new technology of digital image-file storage to the public from 1990 onwards, some publications compared it technically with the analog image storage of the still-video camera used until then. Through translation errors and misinterpretations, and because of the general lack of technical understanding of digital camera technology at the time, some journalists subsequently and erroneously referred to the classic film-based camera systems as analog cameras as well.[4][5]

  

The term has survived to this day and now incorrectly denotes not photography using the analog storage technology of the first digital still-video cameras, but only the technology of film-based photography. In the latter, however, nothing is 'stored' either digitally or analogously; the image is fixed chemically and physically.

  

General

A photograph itself can be neither analog nor digital. Only the image information can be determined point by point by means of physical, analog measurable signals (densitometry, spectroscopy) and, if required, subsequently digitized.

  

After the film is exposed, the image information is at first only latent. This information is stored not in the analog camera but only during the development of the film, by chemical reaction in a three-dimensional gelatin layer (film has several sensitized layers lying on top of one another). The image information is then directly present on the original recording medium (slide or negative). It is visible without further aids as a photograph (a unique original) in the form of developed silver halides or color couplers. If required, a paper picture can be produced from such photographs in a second chemical process in the photographic laboratory, or nowadays also by scanning and printing.

  

With digital storage, the analog signals from the camera sensor are digitized in a second stage and thereby become electronically interpretable and processable. Digital image storage by means of an analog-to-digital converter, after readout from the chip of the digital camera, works (in simplified terms) with a merely two-dimensional digital interpretation of the analog image information and produces a file of differentially determined digital absolute values that can be copied any number of times, practically without loss. These files are written to memory cards in the camera immediately after the exposure. Using suitable image-editing software, they can then be read, processed further, and output on a monitor or printer as a visible photograph.
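The digitizing stage of this two-step process can be sketched in a few lines. The following example quantizes hypothetical analog sensor voltages into 8-bit values; the 0–1 V range and the bit depth are illustrative assumptions, not details from the text:

```python
# Minimal sketch of analog-to-digital conversion of a sensor signal:
# a continuous voltage is clipped to the valid range and mapped onto
# one of 2**bits discrete codes.

def quantize(voltage: float, bits: int = 8, v_max: float = 1.0) -> int:
    """Map an analog voltage to a discrete digital code."""
    levels = 2 ** bits - 1                 # 255 distinct steps for 8 bits
    v = min(max(voltage, 0.0), v_max)      # clip to the converter's range
    return round(v / v_max * levels)

samples = [0.0, 0.25, 0.5, 1.0]            # hypothetical pixel voltages
print([quantize(v) for v in samples])      # [0, 64, 128, 255]
```

The rounding step is exactly where the "interpretation" mentioned above happens: information between two adjacent codes is discarded, but the resulting integers can be copied losslessly any number of times.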

  

Digital photography

  

The first CCD (charge-coupled device) still-video camera was built by Bell in 1970, and in 1972 Texas Instruments filed the first patent on a filmless camera, which used a television screen as a viewfinder.

  

In 1973, Fairchild Imaging produced the first commercial CCD, with a resolution of 100 × 100 pixels.

  

This CCD was used in 1975 in the first working digital camera, built at Kodak and developed by the inventor Steven Sasson. That camera weighed 3.6 kilograms, was larger than a toaster, and needed 23 seconds to transfer a black-and-white image with a resolution of 100 × 100 pixels onto a digital magnetic tape cassette; displaying the image on a screen required another 23 seconds.

  

In 1986, Canon presented the RC-701, the first commercially available still-video camera with magnetic recording of the image data, and Minolta presented the Still Video Back SB-90/SB-90S for the Minolta 9000; by exchanging the back of this 35 mm single-lens reflex camera, the Minolta 9000 became a digital SLR; the image data were stored on 2-inch floppy disks.

  

In 1987, further models of Canon's RC series followed, as well as digital cameras from Fujifilm (ES-1), Konica (KC-400), and Sony (MVC-A7AF). Nikon followed in 1988 with the QV-1000C, and in 1990 and 1991 Kodak followed with the DCS (Digital Camera System) and Rollei with the Digital Scan Pack. From the early 1990s onward, digital photography can be regarded as established in commercial image production.

  

Digital photography revolutionized the possibilities of digital art, but in particular it also makes photo manipulation easier.

  

Photokina 2006 showed that the era of the film-based camera was definitively over.[6] In 2007, 91 percent of all cameras sold worldwide were digital,[7] and conventional film photography shrank to niche markets. In 2011, around 45.4 million people in Germany had a digital camera in their household, and in the same year around 8.57 million digital cameras were sold in Germany.[8]

  

See also: Chronology of photography and History and development of photography

Photography as art

  

The status of photography as art was disputed for a long time; the art theorist Karl Pawek put it pointedly in his book Das optische Zeitalter (Olten/Freiburg i. Br. 1963, p. 58): "The artist creates reality, the photographer sees it."

  

This view regards photography merely as a technical, standardized process with which reality is depicted in an objective, quasi-"natural" way, without any creative and thus artistic aspects coming into play: "the invention of an apparatus for the purpose of producing ... (perspectival) images has, ironically, reinforced the conviction ... that this is the natural form of representation. Apparently something is natural if we can build a machine that does it for us."[9] Nevertheless, photographs soon served as teaching aids and models in the training of visual artists (études d'après nature).

  

Texts of the 19th century, however, already pointed to the artistic character of photography, justified by a use of technique similar to that of other recognized contemporary graphic processes (aquatint, etching, lithography, ...). Photography thereby also becomes an artistic process with which a photographer creates pictorial realities of his own.[10]

  

Numerous painters of the 19th century, such as Eugène Delacroix, also recognized this and used photographs as a means of finding and composing images, as an artistic design tool for painted works, though still without granting them any artistic value of their own.

  

The photographer Henri Cartier-Bresson, himself trained as a painter, likewise wanted photography to be regarded not as an art form but as a craft: "Photography is a craft. Many want to turn it into an art, but we are simply craftsmen who must do their work well." At the same time, however, he claimed for himself the concept of the decisive moment, which was originally elaborated by Gotthold Ephraim Lessing in his poetics of drama; he thus refers directly to an artistic procedure for the production of works of art. Cartier-Bresson's argument therefore served, on the one hand, poetological ennoblement and, on the other, a craftsman's immunization against criticism that might doubt the artistic quality of his works. Indeed, Cartier-Bresson's photographs were shown in museums and art exhibitions very early on, for example in the MoMA retrospective (1947) and the Louvre exhibition (1955).

  

Photography was practiced as art early on (Julia Margaret Cameron, Lewis Carroll, and Oscar Gustave Rejlander in the 1860s). The decisive step toward the recognition of photography as an art form is owed to the efforts of Alfred Stieglitz (1864–1946), who prepared the breakthrough with his magazine Camera Work.

  

Photography first appeared before the German public on a notable scale at the Werkbund exhibition of 1929 in Stuttgart, with international artists such as Edward Weston, Imogen Cunningham, and Man Ray; at the latest since the MoMA exhibitions of Edward Steichen (The Family of Man, 1955) and John Szarkowski (1960s), photography has been recognized as art by a broad public, while at the same time the trend toward applied art began.

  

In 1977, documenta 6 in Kassel, in its famous photography department, was the first internationally significant exhibition to place the works of historical and contemporary photographers from the entire history of photography in a comparative context with contemporary art, on the occasion of the "150 years of photography" celebrated that year.

  

Today, photography is accepted as a fully fledged art form. Indicators of this are the growing number of museums, collections, and research institutions for photography, the increase in professorships for photography, and not least the risen value of photographs at art auctions and among collectors. Numerous genres have developed, such as landscape, nude, industrial, and theater photography, among others, each of which has unfolded its own field of activity within photography. In addition, artistic photomontage has developed into an art object equal in standing to painting. Besides the rising number of photography exhibitions and their visitor numbers, the popularity of modern photography is also visible in the prices achieved at art auctions: five of the ten highest bids for modern photography have been achieved at auctions since 2010, and the currently most expensive photograph, Rhein II by Andreas Gursky, was sold at an art auction in New York in November 2011 for 4.3 million dollars.[11] More recent discussions within photographic and art studies, meanwhile, point to an increasing arbitrariness in the categorization of photography: more and more of what once belonged exclusively to the applied fields of photography is, accordingly, being absorbed by art and its institutions.

  

Photography (see section below for etymology) is the art, science and practice of creating durable images by recording light or other electromagnetic radiation, either chemically by means of a light-sensitive material such as photographic film, or electronically by means of an image sensor.[1] Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. The result in an electronic image sensor is an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing.

  

The result in a photographic emulsion is an invisible latent image, which is later chemically developed into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.

  

Photography has many uses for business, science, manufacturing (e.g. photolithography), art, recreational purposes, and mass communication.

  

The word "photography" was created from the Greek roots φωτός (phōtos), genitive of φῶς (phōs), "light"[2] and γραφή (graphé) "representation by means of lines" or "drawing",[3] together meaning "drawing with light".[4]

  

Several people may have coined the same new term from these roots independently. Hercules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, photographie, in private notes which a Brazilian photography historian believes were written in 1834.[5] Johann von Maedler, a Berlin astronomer, is credited in a 1932 German history of photography as having used it in an article published on 25 February 1839 in the German newspaper Vossische Zeitung.[6] Both of these claims are now widely reported, but apparently neither has ever been independently confirmed beyond reasonable doubt. Credit has traditionally been given to Sir John Herschel both for coining the word and for introducing it to the public. His uses of it in private correspondence prior to 25 February 1839 and at his Royal Society lecture on the subject in London on 14 March 1839 have long been amply documented and accepted as settled fact.

  

History and evolution

Precursor technologies

Photography is the result of combining several technical discoveries. Long before the first photographs were made, the Chinese philosopher Mo Di and the Greek thinkers Aristotle and Euclid described a pinhole camera in the 5th and 4th centuries BCE.[8][9] In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments,[10] Ibn al-Haytham (Alhazen) (965–1040) studied the camera obscura and pinhole camera,[9][11] Albertus Magnus (1193–1280) discovered silver nitrate,[12] and Georg Fabricius (1516–71) discovered silver chloride.[13] Techniques described in the Book of Optics are capable of producing primitive photographs using medieval materials.[14][15][16]

  

Daniele Barbaro described a diaphragm in 1566.[17] Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694.[18] The fiction book Giphantie, published in 1760, by French author Tiphaigne de la Roche, described what can be interpreted as photography.[17]

  

The discovery of the camera obscura that provides an image of a scene dates back to ancient China. Leonardo da Vinci mentions natural camerae obscurae formed by dark caves on the edge of a sunlit valley: a hole in the cave wall acts as a pinhole camera and projects a laterally reversed, upside-down image onto a piece of paper. The birth of photography was thus primarily concerned with developing a means to fix and retain the image produced by the camera obscura.

  

The first success of reproducing images without a camera occurred when Thomas Wedgwood, from the famous family of potters, obtained copies of paintings on leather using silver salts. Since he had no way of permanently fixing those reproductions (stabilizing the image by washing out the non-exposed silver salts), they would turn completely black in the light and thus had to be kept in a dark room for viewing.

  

Renaissance painters used the camera obscura which, in fact, gives the optical rendering in color that dominates Western art. Camera obscura literally means "dark chamber" in Latin: a box with a hole in it that allows light to pass through and project an image onto a piece of paper.

  

First camera photography (1820s)

Invented in the early decades of the 19th century, photography by means of the camera seemed able to capture more detail and information than traditional media, such as painting and sculpture.[19] Photography as a usable process goes back to the 1820s with the development of chemical photography. The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it.[7] Niépce was successful again in 1825. He made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens), in 1826 or 1827.[20]

  

Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. Working in partnership with Louis Daguerre, he developed a somewhat more sensitive process that produced visually superior results, but it still required a few hours of exposure in the camera. Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process, the essential elements of which were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: unlike the other pedestrian and horse-drawn traffic on the busy boulevard, which appears deserted, one man having his boots polished stood sufficiently still throughout the approximately ten-minute-long exposure to be visible. Eventually, France agreed to pay Daguerre a pension for his process in exchange for the right to present his invention to the world as the gift of France, which occurred on 19 August 1839.

Meanwhile, in Brazil, Hercules Florence had already created his own process in 1832, naming it Photographie, and an English inventor, William Fox Talbot, had created another method of making a reasonably light-fast silver process image but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his method and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, with exposures comparable to the daguerreotype. In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies, the basis of most chemical photography up to the present day. Daguerreotypes could only be replicated by rephotographing them with a camera.[21] Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence.[22][23]

  

John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839.

  

In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets to the collodion process; the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper.

  

Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1884, George Eastman developed an early type of film to replace photographic plates, leading to the technology used by film cameras today.

  

In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize for Physics in 1908.

  

Black-and-white

See also: Monochrome photography

All photography was originally monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost and its "classic" photographic look. The tones and the contrast between light and dark shadows define black-and-white photography.[24] Note that monochromatic pictures are not necessarily composed of pure blacks and whites, but may contain other hues depending on the process: the cyanotype process produces an image composed of blue tones, and the albumen process, first used more than 150 years ago, produces brown tones.

  

Many photographers continue to produce some monochrome images, often because of the established archival permanence of well processed silver halide based materials. Some full color digital images are processed using a variety of techniques to create black and whites, and some manufacturers produce digital cameras that exclusively shoot monochrome.

  

Color

Color photography was explored beginning in the mid-19th century. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.

  

The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by physicist James Clerk Maxwell in 1855. Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image.
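Maxwell's three-channel principle maps directly onto how digital color images are represented today. The following sketch (illustrative only; the tiny 2 × 2 "exposures" are invented data, not from the source) recombines three hypothetical black-and-white filter exposures into one RGB image:

```python
# Sketch of Maxwell's three-color-separation principle: three black-and-white
# exposures taken through red, green and blue filters are recombined into a
# color image by treating each exposure as one channel per pixel.

red_exposure   = [[255, 0], [0, 0]]   # brightness seen through the red filter
green_exposure = [[0, 255], [0, 0]]   # ... through the green filter
blue_exposure  = [[0, 0], [255, 0]]   # ... through the blue filter

def combine(r, g, b):
    """Merge three single-channel images into one RGB image."""
    return [[(r[y][x], g[y][x], b[y][x]) for x in range(len(r[0]))]
            for y in range(len(r))]

rgb = combine(red_exposure, green_exposure, blue_exposure)
print(rgb[0][0])   # (255, 0, 0) -> a pure red pixel
print(rgb[1][1])   # (0, 0, 0)   -> black: no light recorded in any channel
```

Superimposing the three filtered projections, as described next, is the additive counterpart of this per-pixel channel stacking.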

  

Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s.

  

Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images.

  

The development of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability.

  

Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s.

  

Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multilayer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure.

  

Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multilayer emulsion and the same principles, most closely resembling Agfa's product.

  

Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963.

  

Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment.

  

Digital photography

Main article: Digital photography

See also: Digital camera and Digital versus film photography

In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born.

  

Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.[25] An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital imaging is a highly manipulable medium. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications.

  

Photography gained the interest of many scientists and artists from its inception. Scientists have used photography to record and study movements, such as Eadweard Muybridge's study of human and animal locomotion in 1887. Artists are equally interested in these aspects but also try to explore avenues other than the photo-mechanical representation of reality, such as the pictorialist movement.

  

Military, police, and security forces use photography for surveillance, recognition and data storage. Photography is used by amateurs to preserve memories, to capture special moments, to tell stories, to send messages, and as a source of entertainment. High speed photography allows for visualizing events that are too fast for the human eye.

  

Technical aspects

Main article: Camera

The camera is the image-forming device, and photographic film or a silicon electronic image sensor is the sensing medium. The respective recording medium can be the film itself, or a digital electronic or magnetic memory.[26]

  

Photographers control the camera and lens to "expose" the light recording material (such as film) to the required amount of light to form a "latent image" (on film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper or film.

  

The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. The subject being photographed, however, must be illuminated. Cameras can range from the very small to the very large: even a whole room that is kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera).

  

As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens.

  

The movie camera is a type of photographic camera which takes a rapid sequence of photographs on strips of film. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures together to create the illusion of motion.[27]

  

Camera controls are interrelated. The total amount of light reaching the film plane (the 'exposure') changes with the duration of exposure, the aperture of the lens, and the effective focal length of the lens (which, in variable focal length lenses, can force a change in aperture as the lens is zoomed). Changing any of these controls can alter the exposure. Many cameras may be set to adjust most or all of these controls automatically. This automatic functionality is useful for occasional photographers in many situations.

  

The duration of an exposure is referred to as shutter speed, often even in cameras that do not have a physical shutter, and is typically measured in fractions of a second. Exposures of one to several seconds are common for still-life subjects, and for night scenes exposure times can extend to several hours. For a subject in motion, however, a fast shutter speed is needed to prevent the photograph from coming out blurred.[29]

  

The effective aperture is expressed by an f-number or f-stop (derived from focal ratio), which is the ratio of the focal length to the diameter of the aperture. A longer lens passes less light at the same aperture diameter, because the light it gathers is spread over a larger image area; a shorter focal length gives a brighter image with the same size of aperture.

  

The smaller the f/number, the larger the effective aperture. The present system of f/numbers to give the effective aperture of a lens was standardized by an international convention. There were earlier, different series of numbers in older cameras.

  

If the f-number is decreased by a factor of √2, the aperture diameter is increased by the same factor, and its area is increased by a factor of 2. The f-stops that might be found on a typical lens include 2.8, 4, 5.6, 8, 11, 16, 22, 32, where going up "one stop" (using lower f-stop numbers) doubles the amount of light reaching the film, and stopping down one stop halves the amount of light.
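This arithmetic can be sketched directly. Note that the f-numbers marked on lenses are conventionally rounded, so the computed series differs slightly from the marked values:

```python
import math

# Each full stop multiplies the f-number by sqrt(2); the light passed is
# proportional to 1/N^2, so each stop doubles or halves it.
stops = [2.8 * math.sqrt(2) ** i for i in range(8)]
# computed: ~2.8, 4.0, 5.6, 7.9, 11.2, 15.8, 22.4, 31.7
# marked on lenses as:  2.8, 4, 5.6, 8, 11, 16, 22, 32

def relative_light(n):
    """Light passed, relative, proportional to 1/N^2."""
    return 1.0 / n ** 2

# f/2.8 -> f/4 is stopping down one stop: about half the light
# (exactly (2.8/4)^2 = 0.49, since 4 is a rounded value for 2.8 * sqrt(2)).
ratio = relative_light(4.0) / relative_light(2.8)
```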

  

Image capture can be achieved through various combinations of shutter speed, aperture, and film or sensor speed. Different (but related) settings of aperture and shutter speed enable photographs to be taken under various conditions of film or sensor speed, lighting and motion of subjects and/or camera, and desired depth of field. A slower speed film will exhibit less "grain", and a slower speed setting on an electronic sensor will exhibit less "noise", while higher film and sensor speeds allow for a faster shutter speed, which reduces motion blur or allows the use of a smaller aperture to increase the depth of field.

  

For example, a wider aperture is used in lower light and a smaller aperture in brighter light. If a subject is in motion, a fast shutter speed may be needed. A tripod can also be helpful, in that it allows a slower shutter speed to be used without camera shake.

  

For example, f/8 at 8 ms (1/125 of a second) and f/5.6 at 4 ms (1/250 of a second) yield the same amount of light. The chosen combination has an impact on the final result. The aperture and focal length of the lens determine the depth of field, which refers to the range of distances from the lens that will be in focus. A longer lens or a wider aperture will result in "shallow" depth of field (i.e. only a small plane of the image will be in sharp focus). This is often useful for isolating subjects from backgrounds as in individual portraits or macro photography.
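The equivalence can be checked with the standard exposure-value formula EV = log2(N²/t). Because the nominal f-numbers are rounded (f/5.6 stands for 4√2 ≈ 5.657), the two EVs agree only to within a few hundredths of a stop:

```python
import math

def ev(f_number, shutter_s):
    """Exposure value at a given aperture and shutter time: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

ev_a = ev(8.0, 1 / 125)    # f/8 at 8 ms
ev_b = ev(5.6, 1 / 250)    # f/5.6 at 4 ms
# ev_a ~= 12.97, ev_b ~= 12.94: the same exposure, within nominal rounding.
```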

  

Conversely, a shorter lens, or a smaller aperture, will result in more of the image being in focus. This is generally more desirable when photographing landscapes or groups of people. With very small apertures, such as pinholes, a wide range of distance can be brought into focus, but sharpness is severely degraded by diffraction with such small apertures. Generally, the highest degree of "sharpness" is achieved at an aperture near the middle of a lens's range (for example, f/8 for a lens with available apertures of f/2.8 to f/16). However, as lens technology improves, lenses are becoming capable of making increasingly sharp images at wider apertures.

  

Image capture is only part of the image forming process. Regardless of material, some process must be employed to render the latent image captured by the camera into a viewable image. With slide film, the developed film is simply mounted for projection. Print film requires the developed film negative to be printed onto photographic paper or transparency. Digital images may be uploaded to an image server (e.g., a photo-sharing web site), viewed on a television, or transferred to a computer or digital photo frame. Every type can also be printed on more "classical" media such as regular paper or photographic paper.

  

Prior to the rendering of a viewable image, modifications can be made using several controls. Many of these controls are similar to controls during image capture, while some are exclusive to the rendering process. Most printing controls have equivalent digital concepts, but some create different effects. For example, dodging and burning controls are different between digital and film processes. Other printing modifications include:

Digital point-and-shoot cameras have become widespread consumer products, outselling film cameras, and including new features such as video and audio recording. Kodak announced in January 2004 that it would no longer sell reloadable 35 mm cameras in western Europe, Canada and the United States after the end of that year. Kodak was at that time a minor player in the reloadable film camera market. In January 2006, Nikon followed suit and announced that it would stop production of all but two models of its film cameras: the low-end Nikon FM10 and the high-end Nikon F6. On 25 May 2006, Canon announced it would stop developing new film SLR cameras.[34] Though most new camera designs are now digital, a new 6x6cm/6x7cm medium format film camera was introduced in 2008 in a cooperation between Fuji and Voigtländer.[35][36]

  

According to a survey made by Kodak in 2007, when the majority of photography was already digital, 75 percent of professional photographers said they would continue to use film, even though some had embraced digital.[37]

  

According to the PMA, nearly a billion rolls of film were sold in the year 2000; by 2011 the figure had fallen to a mere 20 million rolls, plus 31 million single-use cameras.[38]

  

Quelle:

en.wikipedia.org/wiki/Photography

de.wikipedia.org/wiki/Fotografie

   

High-dynamic-range imaging (HDRI or HDR) is a set of techniques used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible using standard digital imaging or photographic techniques. HDR images can represent more accurately the range of intensity levels found in real scenes, from direct sunlight to faint starlight, and are often captured by way of a plurality of differently exposed pictures of the same subject matter.[1][2][3][4]

 

Non-HDR cameras take photographs with a limited exposure range, resulting in the loss of detail in bright or dark areas. HDR compensates for this loss of detail by capturing multiple photographs at different exposure levels and combining them to produce a photograph representative of a broader tonal range.

 

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR)[5] or standard-dynamic-range (SDR)[6] photographs. HDR images can also be acquired using special image sensors, like oversampled binary image sensor. Tone mapping methods, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.

In photography, dynamic range is measured in EV differences (known as stops) between the brightest and darkest parts of the image that show detail. An increase of one EV, or one stop, represents a doubling of the amount of light; ten stops, for example, correspond to a contrast ratio of 2^10 = 1024.
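In code, the relation between stops and contrast ratio is simply a power of two:

```python
# Each EV step (stop) doubles the light, so n stops span a 2**n contrast ratio.
def contrast_ratio(stops):
    return 2 ** stops

# Ten stops of dynamic range correspond to a 1024:1 contrast ratio.
ten_stop_ratio = contrast_ratio(10)
```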

High-dynamic-range photographs are generally achieved by capturing multiple standard photographs, often using exposure bracketing, and then merging them into an HDR image. Digital photographs are often encoded in a camera's raw image format, because 8 bit JPEG encoding doesn't offer enough values to allow fine transitions (and introduces undesirable effects due to the lossy compression).
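A minimal merge can be sketched as follows. This is an illustrative toy, not any camera's or editor's actual algorithm; the triangle weighting and the `merge_hdr` helper are invented for the example. Each pixel of each bracketed exposure is divided by its exposure time to estimate scene radiance, and the estimates are averaged with a weight that favors well-exposed mid-tones over clipped shadows and highlights.

```python
def merge_hdr(images, exposure_times):
    """Merge bracketed grayscale exposures (values in [0, 1]) into a radiance map."""
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for img, t in zip(images, exposure_times):
                v = img[y][x]
                wgt = 1.0 - abs(2.0 * v - 1.0)  # triangle weight: favor mid-tones
                num += wgt * (v / t)            # v / t estimates scene radiance
                den += wgt
            # fall back to the longest exposure if every sample was clipped
            out[y][x] = num / den if den else images[-1][y][x] / exposure_times[-1]
    return out

short = [[0.2, 0.5]]   # toy 1x2 frame, short exposure (0.25 s)
long_ = [[0.8, 1.0]]   # same scene, 4x the exposure time (1.0 s)
hdr = merge_hdr([short, long_], [0.25, 1.0])
# hdr[0][1] comes entirely from the short frame: the long frame clipped to 1.0
# and received zero weight there.
```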

 

The images from any camera that allows manual exposure control can be used to create HDR images. This includes film cameras, though the images may need to be digitized so they can be processed with software HDR methods.

 

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D, to the 18 EV of the Canon EOS-1D Mark II.[10] As the popularity of this imaging method grows, several camera manufacturers are now offering built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone mapped JPEG file.[11] The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format.[12] Even some smartphones now include HDR modes, and most platforms have apps that provide HDR picture taking.[13]

 

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.[14]

 

Camera characteristics

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and spectral calibration affect resulting high-dynamic-range images.[15]

 

Tone mapping

Main article: Tone mapping

Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast.
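As one well-known global example, the classic Reinhard operator compresses luminance by L/(1+L), mapping arbitrarily bright values into [0, 1). This is a minimal sketch rather than a production implementation; local (neighborhood) operators additionally preserve localized contrast:

```python
# Global Reinhard tone mapping: L_out = L / (1 + L).
# Compresses unbounded scene luminance into the displayable range [0, 1).
def reinhard(luminance):
    return [[v / (1.0 + v) for v in row] for row in luminance]

hdr = [[0.1, 1.0, 100.0]]   # toy luminance values spanning roughly ten stops
ldr = reinhard(hdr)
# 0.1 -> ~0.091, 1.0 -> 0.5, 100.0 -> ~0.990: highlights are compressed most,
# while the order of tones is preserved.
```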

 

Software

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include

Adobe Photoshop

Dynamic Photo HDR

HDR PhotoStudio

Luminance HDR

Oloneo PhotoEngine

Photomatix Pro

PTGui

Comparison with traditional digital images

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors that should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.[16][17][18]

 

HDR images, unlike traditional images, often don't use fixed ranges per color channel, so they can represent many more colors over a much wider dynamic range. For that purpose, they don't use integer values to represent the single color channels (e.g., 0–255 in an 8 bit per pixel interval for red, green and blue) but instead use a floating point representation. Common are 16-bit (half precision) or 32-bit floating point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
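The difference is easy to demonstrate with Python's struct module: a 16-bit half-precision value can store a luminance far above 1.0 in just two bytes, whereas an 8-bit integer channel is clamped to 0–255. The radiance value here is an arbitrary illustration:

```python
import struct

radiance = 2500.0                     # e.g. a bright sky pixel, far above 1.0
packed = struct.pack('<e', radiance)  # '<e' = little-endian 16-bit half float
restored = struct.unpack('<e', packed)[0]
# Two bytes suffice, and 2500.0 survives the round trip exactly, since
# half precision represents all even integers between 2048 and 4096.
```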

The idea of using several exposures to fix a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, the luminosity range being too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.[20]

 

Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This is effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. The print took five days to make in order to reproduce the tonal range of the scene, which extends from a bright lamp (relative to the scene) to a dark shadow.[22]

 

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which features dodging and burning prominently, in the context of his Zone System.

 

With the advent of color photography, tone mapping in the darkroom was no longer possible, due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response over the years, or shot in black and white to use tone mapping methods.

Film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force".[23] This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color.[24] The dynamic range of this extended range film has been estimated as 1:108.[25] It has been used to photograph nuclear explosions,[26] for astronomical photography,[27] for spectrographic research,[28] and for medical imaging.[29] Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

 

Late-twentieth century

The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel, led by Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988.[30] In 1993 the same group introduced the first commercial medical camera, which captured multiple images at different exposures in real time and produced an HDR video image.[31]

 

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping this result. Global HDR was first introduced in 1993[1] resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.[2]

 

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory.[32] Mann's method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.[32]

 

In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping.

 

While custom high-dynamic-range digital video solutions had been developed for industrial manufacturing during the 1980s, it was not until the early 2000s that several scholarly research efforts used consumer-grade sensors and cameras.[34] A few companies such as RED[35] and Arri[36] have been developing digital sensors capable of a higher dynamic range. RED EPIC-X can capture HDRx images with a user selectable 1-3 stops of additional highlight latitude in the 'x' channel. The 'x' channel can be merged with the normal channel in post production software. With the advent of low-cost consumer digital cameras, many amateurs began posting tone mapped HDR time-lapse videos on the Internet, essentially a sequence of still photographs in quick succession. In 2010 the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a beam splitter and consumer grade HD video cameras.[37] Similar methods have been described in the academic literature in 2001[38] and 2007.[39]

 

Modern movies have often been filmed with cameras featuring a higher dynamic range, and legacy movies can be upgraded even if manual intervention is needed for some frames (as happened in the past when black-and-white films were upgraded to color). Special effects, especially those in which real and synthetic footage are seamlessly mixed, require both HDR shooting and rendering. HDR video is also needed in applications where capturing temporal aspects of changes in the scene demands high accuracy, such as monitoring of industrial processes like welding, predictive driver assistance systems in the automotive industry, and surveillance systems. HDR video can also speed up image acquisition in applications where a large number of static HDR images are needed, for example in image-based methods in computer graphics. Finally, with the spread of TV sets with enhanced dynamic range, broadcasting HDR video may become important, but may take a long time to occur due to standardization issues. For this particular application, enhancing a current low-dynamic-range (LDR) video signal to HDR in intelligent TV sets seems a more viable near-term solution.

 

More and more CMOS image sensors now have high dynamic range capability within the pixels themselves. Such pixels are intrinsically non-linear (by design) so that the wide dynamic range of the scene is non-linearly compressed into a smaller dynamic range electronic representation inside the pixel.[41] Such sensors are used in extreme dynamic range applications like welding or automotive.

 

Some other sensors, designed for use in security applications, can automatically provide two or more images for each frame with changing exposure. For example, a sensor for 30 fps video can deliver 60 fps, with the odd frames taken at a short exposure time and the even frames at a longer one. Some of these sensors can even combine the two images on-chip, so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.

 

Quelle:

 

en.wikipedia.org/wiki/High-dynamic-range_imaging

 

de.wikipedia.org/wiki/High_Dynamic_Range_Image

  

Photography (see section below for etymology) is the art, science and practice of creating durable images by recording light or other electromagnetic radiation, either chemically by means of a light-sensitive material such as photographic film, or electronically by means of an image sensor.[1] Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. The result in an electronic image sensor is an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing.

 

The result in a photographic emulsion is an invisible latent image, which is later chemically developed into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.

 

Photography has many uses for business, science, manufacturing (e.g. photolithography), art, recreational purposes, and mass communication.

 

The word "photography" was created from the Greek roots φωτός (phōtos), genitive of φῶς (phōs), "light"[2] and γραφή (graphé) "representation by means of lines" or "drawing",[3] together meaning "drawing with light".[4]

 

Several people may have coined the same new term from these roots independently. Hercules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, photographie, in private notes which a Brazilian photography historian believes were written in 1834.[5] Johann von Maedler, a Berlin astronomer, is credited in a 1932 German history of photography as having used it in an article published on 25 February 1839 in the German newspaper Vossische Zeitung.[6] Both of these claims are now widely reported but apparently neither has ever been independently confirmed as beyond reasonable doubt. Credit has traditionally been given to Sir John Herschel both for coining the word and for introducing it to the public. His uses of it in private correspondence prior to 25 February 1839 and at his Royal Society lecture on the subject in London on 14 March 1839 have long been amply documented and accepted as settled fact.

 

History and evolution

Precursor technologies

Photography is the result of combining several technical discoveries. Long before the first photographs were made, Chinese philosopher Mo Di and Greek mathematicians Aristotle and Euclid described a pinhole camera in the 5th and 4th centuries BCE.[8][9] In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments,[10] Ibn al-Haytham (Alhazen) (965–1040) studied the camera obscura and pinhole camera,[9][11] Albertus Magnus (1193–1280) discovered silver nitrate,[12] and Georg Fabricius (1516–71) discovered silver chloride.[13] Techniques described in the Book of Optics are capable of producing primitive photographs using medieval materials.[14][15][16]

 

Daniele Barbaro described a diaphragm in 1566.[17] Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694.[18] The fiction book Giphantie, published in 1760, by French author Tiphaigne de la Roche, described what can be interpreted as photography.[17]

 

The discovery of the camera obscura, which provides an image of a scene, dates back to ancient China. Leonardo da Vinci mentioned natural camerae obscurae formed by dark caves on the edge of a sunlit valley: a hole in the cave wall acts as a pinhole camera and projects a laterally reversed, upside-down image onto a piece of paper. The birth of photography was thus primarily concerned with developing a means to fix and retain the image produced by the camera obscura.

 

The first success of reproducing images without a camera occurred when Thomas Wedgwood, from the famous family of potters, obtained copies of paintings on leather using silver salts. Since he had no way of permanently fixing those reproductions (stabilizing the image by washing out the non-exposed silver salts), they would turn completely black in the light and thus had to be kept in a dark room for viewing.

 

Renaissance painters used the camera obscura which, in fact, gives the optical rendering in color that dominates Western art. Camera obscura literally means "dark chamber" in Latin: a box with a hole in it that lets light through and projects an image onto a surface, such as a piece of paper.

 

First camera photography (1820s)

Invented in the early decades of the 19th century, photography by means of the camera seemed able to capture more detail and information than traditional media, such as painting and sculpture.[19] Photography as a usable process goes back to the 1820s with the development of chemical photography. The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it.[7] Niépce was successful again in 1825. He made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens), in 1826 or 1827.[20]

 

Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. Working in partnership with Louis Daguerre, he developed a somewhat more sensitive process that produced visually superior results, but it still required a few hours of exposure in the camera. Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process, the essential elements of which were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: unlike the other pedestrian and horse-drawn traffic on the busy boulevard, which appears deserted, one man having his boots polished stood sufficiently still throughout the approximately ten-minute-long exposure to be visible. Eventually, France agreed to pay Daguerre a pension for his process in exchange for the right to present his invention to the world as the gift of France, which occurred on 19 August 1839.

Meanwhile, in Brazil, Hercules Florence had already created his own process in 1832, naming it Photographie, and an English inventor, William Fox Talbot, had created another method of making a reasonably light-fast silver process image but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his method and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, with exposures comparable to the daguerreotype. In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies, the basis of most chemical photography up to the present day. Daguerreotypes could only be replicated by rephotographing them with a camera.[21] Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence.[22][23]

 

John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839.

 

In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets of the collodion process: the ambrotype (a positive image on glass), the ferrotype or tintype (a positive image on metal), and the glass negative, which was used to make positive prints on albumen or salted paper.

 

Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1884, George Eastman developed an early type of film to replace photographic plates, leading to the technology used by film cameras today.

 

In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize for Physics in 1908.

 

Black-and-white

See also: Monochrome photography

All photography was originally monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost and its "classic" photographic look. The tones and the contrast between light and dark areas define black-and-white photography.[24] Monochrome pictures are not necessarily pure blacks and whites; depending on the process, they may contain other hues: the cyanotype process produces an image composed of blue tones, and the albumen process, first used more than 150 years ago, produces brown tones.

 

Many photographers continue to produce some monochrome images, often because of the established archival permanence of well-processed silver-halide-based materials. Some full-color digital images are processed using a variety of techniques to create black-and-white results, and some manufacturers produce digital cameras that exclusively shoot monochrome.

 

Color

Color photography was explored beginning in the mid-19th century. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.

 

The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by physicist James Clerk Maxwell in 1855. Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image.
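Maxwell's three-record principle maps directly onto the channel model used in modern digital imaging. A minimal NumPy sketch, using hypothetical stand-in arrays for the three filtered black-and-white exposures:

```python
import numpy as np

# Hypothetical stand-ins for three black-and-white records of the same
# scene, each exposed through a red, green, or blue filter (values 0..1).
h, w = 4, 6
red_record = np.full((h, w), 0.8)
green_record = np.full((h, w), 0.5)
blue_record = np.full((h, w), 0.2)

# The three filtered records are exactly the three channels needed
# to reconstruct a color image.
color = np.stack([red_record, green_record, blue_record], axis=-1)
print(color.shape)  # (4, 6, 3)
```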

 

Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s.

 

Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images.
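Modern digital restorations of such triple-plate photographs work by aligning the three color-filtered records before stacking them; the fringes Prokudin-Gorskii saw are precisely what alignment cannot fix for subjects that moved between exposures. A minimal brute-force alignment sketch (the function name and the sum-of-squared-differences scoring are illustrative choices, not a description of any particular restoration project):

```python
import numpy as np

def align_offset(ref, target, max_shift=8):
    """Search for the (dy, dx) shift of `target` that best matches `ref`,
    scored by sum of squared differences (SSD)."""
    best, best_score = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(target, dy, axis=0), dx, axis=1)
            score = np.sum((shifted - ref) ** 2)
            if score < best_score:
                best, best_score = (dy, dx), score
    return best

# A shifted copy of a random "plate" is recovered exactly.
rng = np.random.default_rng(0)
plate = rng.random((32, 32))
shifted_plate = np.roll(np.roll(plate, 3, axis=0), 5, axis=1)
print(align_offset(plate, shifted_plate))  # (-3, -5)
```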

 

The development of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability.

 

Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s.

 

Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multilayer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure.

 

Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multilayer emulsion and the same principles, most closely resembling Agfa's product.

 

Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963.

 

Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment.

 

Digital photography

Main article: Digital photography

See also: Digital camera and Digital versus film photography

In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born.

 

Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.[25] An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital images are easily manipulated. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications.

  

Photography gained the interest of many scientists and artists from its inception. Scientists have used photography to record and study movements, such as Eadweard Muybridge's study of human and animal locomotion in 1887. Artists are equally interested in these aspects but also try to explore avenues other than the photo-mechanical representation of reality, such as the pictorialist movement.

 

Military, police, and security forces use photography for surveillance, recognition and data storage. Photography is used by amateurs to preserve memories, to capture special moments, to tell stories, to send messages, and as a source of entertainment. High speed photography allows for visualizing events that are too fast for the human eye.

 

Technical aspects

Main article: Camera

The camera is the image-forming device, and photographic film or a silicon electronic image sensor is the sensing medium. The respective recording medium can be the film itself, or a digital electronic or magnetic memory.[26]

 

Photographers control the camera and lens to "expose" the light recording material (such as film) to the required amount of light to form a "latent image" (on film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper or film.

 

The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. The subject being photographed, however, must be illuminated. Cameras can range from small to very large: a whole room may be kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera).

 

As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens.

 

The movie camera is a type of photographic camera which takes a rapid sequence of photographs on strips of film. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures together to create the illusion of motion.[27]

 

Camera controls are interrelated. The total amount of light reaching the film plane (the 'exposure') changes with the duration of the exposure, the aperture of the lens, and the effective focal length of the lens (which, in variable focal length lenses, can force a change in aperture as the lens is zoomed). Changing any of these controls can alter the exposure. Many cameras may be set to adjust most or all of these controls automatically. This automatic functionality is useful for occasional photographers in many situations.

 

The duration of an exposure is referred to as shutter speed, often even in cameras that do not have a physical shutter, and is typically measured in fractions of a second. Exposures of one to several seconds are quite possible, usually for still-life subjects, and for night scenes exposure times can be several hours. A subject in motion, however, calls for a fast shutter speed to prevent the photograph from coming out blurry.[29]

 

The effective aperture is expressed by an f-number or f-stop (derived from focal ratio), which is the ratio of the focal length to the diameter of the effective aperture. A longer lens will pass less light at the same aperture diameter because of the greater distance the light has to travel; a shorter focal length will give a brighter image with the same size of aperture.

 

The smaller the f/number, the larger the effective aperture. The present system of f/numbers to give the effective aperture of a lens was standardized by an international convention. There were earlier, different series of numbers in older cameras.

 

If the f-number is decreased by a factor of √2, the aperture diameter is increased by the same factor, and its area is increased by a factor of 2. The f-stops that might be found on a typical lens include 2.8, 4, 5.6, 8, 11, 16, 22, 32, where going up "one stop" (using lower f-stop numbers) doubles the amount of light reaching the film, and stopping down one stop halves the amount of light.
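The √2 relationship can be checked numerically. A small sketch (the 50 mm focal length is an arbitrary example):

```python
import math

def aperture_diameter(focal_length_mm, f_number):
    """Effective aperture diameter: D = f / N."""
    return focal_length_mm / f_number

# Opening up from f/8 to f/5.6 divides N by roughly sqrt(2), which
# doubles the aperture area and hence the light admitted ("one stop").
f = 50.0  # hypothetical 50 mm lens
area_ratio = (aperture_diameter(f, 5.6) / aperture_diameter(f, 8.0)) ** 2
print(round(area_ratio, 2))  # 2.04; not exactly 2 because 5.6 rounds 4*sqrt(2)
```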

 

Image capture can be achieved through various combinations of shutter speed, aperture, and film or sensor speed. Different (but related) settings of aperture and shutter speed enable photographs to be taken under various conditions of film or sensor speed, lighting and motion of subjects and/or camera, and desired depth of field. A slower speed film will exhibit less "grain", and a slower speed setting on an electronic sensor will exhibit less "noise", while higher film and sensor speeds allow for a faster shutter speed, which reduces motion blur or allows the use of a smaller aperture to increase the depth of field.

 

For example, a wider aperture is used in lower light and a smaller aperture in brighter light. If a subject is in motion, then a high shutter speed may be needed. A tripod can also be helpful in that it enables a slower shutter speed to be used.

 

For example, f/8 at 8 ms (1/125 of a second) and f/5.6 at 4 ms (1/250 of a second) yield the same amount of light. The chosen combination has an impact on the final result. The aperture and focal length of the lens determine the depth of field, which refers to the range of distances from the lens that will be in focus. A longer lens or a wider aperture will result in "shallow" depth of field (i.e. only a small plane of the image will be in sharp focus). This is often useful for isolating subjects from backgrounds as in individual portraits or macro photography.
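The equivalence of the two settings can be verified with the standard exposure value formula EV = log2(N²/t); combinations with equal EV admit equal light. The small residual below comes from the rounding built into the f-stop series:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """EV = log2(N^2 / t); settings with equal EV admit equal light."""
    return math.log2(f_number ** 2 / shutter_seconds)

ev_a = exposure_value(8.0, 1 / 125)  # f/8 at 8 ms
ev_b = exposure_value(5.6, 1 / 250)  # f/5.6 at 4 ms
print(abs(ev_a - ev_b) < 0.05)  # True: equal to within f-stop rounding
```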

 

Conversely, a shorter lens, or a smaller aperture, will result in more of the image being in focus. This is generally more desirable when photographing landscapes or groups of people. With very small apertures, such as pinholes, a wide range of distance can be brought into focus, but sharpness is severely degraded by diffraction with such small apertures. Generally, the highest degree of "sharpness" is achieved at an aperture near the middle of a lens's range (for example, f/8 for a lens with available apertures of f/2.8 to f/16). However, as lens technology improves, lenses are becoming capable of making increasingly sharp images at wider apertures.

 

Image capture is only part of the image forming process. Regardless of material, some process must be employed to render the latent image captured by the camera into a viewable image. With slide film, the developed film is just mounted for projection. Print film requires the developed film negative to be printed onto photographic paper or transparency. Digital images may be uploaded to an image server (e.g., a photo-sharing web site), viewed on a television, or transferred to a computer or digital photo frame. Every type can be printed on more "classical" media such as regular paper or photographic paper.

 

Prior to the rendering of a viewable image, modifications can be made using several controls. Many of these controls are similar to controls during image capture, while some are exclusive to the rendering process. Most printing controls have equivalent digital concepts, but some create different effects. For example, dodging and burning controls are different between digital and film processes. Other printing modifications include:

Digital point-and-shoot cameras have become widespread consumer products, outselling film cameras, and including new features such as video and audio recording. Kodak announced in January 2004 that it would no longer sell reloadable 35 mm cameras in western Europe, Canada and the United States after the end of that year. Kodak was at that time a minor player in the reloadable film camera market. In January 2006, Nikon followed suit and announced that it would stop production of all but two models of its film cameras: the low-end Nikon FM10 and the high-end Nikon F6. On 25 May 2006, Canon announced it would stop developing new film SLR cameras.[34] Though most new camera designs are now digital, a new 6x6cm/6x7cm medium format film camera was introduced in 2008 in a cooperation between Fuji and Voigtländer.[35][36]

 

According to a survey made by Kodak in 2007, when the majority of photography was already digital, 75 percent of professional photographers said they would continue to use film, even though some embraced digital.[37]

 

The PMA says that nearly a billion rolls of film were sold in the year 2000, but by 2011 a mere 20 million rolls, plus 31 million single-use cameras.[38]

 

Quelle:

en.wikipedia.org/wiki/Photography

de.wikipedia.org/wiki/Fotografie

 

High-dynamic-range imaging (HDRI or HDR) is a set of techniques used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible using standard digital imaging or photographic techniques. HDR images can more accurately represent the range of intensity levels found in real scenes, from direct sunlight to faint starlight, and are often captured by way of a plurality of differently exposed pictures of the same subject matter.[1][2][3][4]

 

Non-HDR cameras take photographs with a limited exposure range, resulting in the loss of detail in bright or dark areas. HDR compensates for this loss of detail by capturing multiple photographs at different exposure levels and combining them to produce a photograph representative of a broader tonal range.

 

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR)[5] or standard-dynamic-range (SDR)[6] photographs. HDR images can also be acquired using special image sensors, like oversampled binary image sensor. Tone mapping methods, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.

In photography, dynamic range is measured in EV differences (known as stops) between the brightest and darkest parts of the image that show detail. An increase of one EV, or one stop, represents a doubling of the amount of light; ten stops, for example, correspond to a contrast ratio of 2^10 = 1024.
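Because each stop doubles the light, a dynamic range of n stops converts to a contrast ratio of 2**n:

```python
def stops_to_contrast_ratio(stops):
    """Each EV step doubles the light, so n stops span a 2**n : 1 ratio."""
    return 2 ** stops

for stops in (1, 10, 20):
    print(stops, stops_to_contrast_ratio(stops))
# 1 -> 2, 10 -> 1024, 20 -> 1048576
```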

High-dynamic-range photographs are generally achieved by capturing multiple standard photographs, often using exposure bracketing, and then merging them into an HDR image. Digital photographs are often encoded in a camera's raw image format, because 8-bit JPEG encoding does not offer enough values to allow fine transitions (and introduces undesirable effects due to the lossy compression).
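The merging step can be sketched with NumPy. This is a deliberate simplification that assumes a linear sensor response; real pipelines (e.g. Debevec-style merging) first recover the camera's response curve. The hat-shaped weight distrusts pixels near black or white, where an exposure carries little information:

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge bracketed shots (0..1 values) into a radiance map, assuming
    a linear sensor. Each pixel's estimate is value / exposure_time,
    averaged with a hat weight peaking at mid-gray."""
    acc = np.zeros_like(exposures[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(exposures, times):
        w = np.clip(1.0 - 2.0 * np.abs(img - 0.5), 1e-6, None)
        acc += w * (img / t)
        wsum += w
    return acc / wsum

# Two hypothetical frames of the same scene, bracketed two stops apart.
scene = np.array([[0.2, 0.5], [0.7, 0.9]])  # long-exposure pixel values
long_exp = scene                             # shot at 1/30 s
short_exp = scene / 4.0                      # shot at 1/120 s
radiance = merge_hdr([long_exp, short_exp], [1 / 30, 1 / 120])
print(radiance)  # both exposures agree on radiance = scene * 30
```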

 

The images from any camera that allows manual exposure control can be used to create HDR images. This includes film cameras, though the images may need to be digitized so they can be processed with software HDR methods.

 

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, ranging from 3 EV on the Canon EOS 40D to 18 EV on the Canon EOS-1D Mark II.[10] As the popularity of this imaging method grows, several camera manufacturers now offer built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone mapped JPEG file.[11] The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format.[12] Even some smartphones now include HDR modes, and most platforms have apps that provide HDR picture taking.[13]

 

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.[14]

 

Camera characteristics

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and spectral calibration affect resulting high-dynamic-range images.[15]

 

Tone mapping

Main article: Tone mapping

Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast.
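A minimal illustration is the global Reinhard curve L/(1+L). Unlike the local operators described above, a purely global curve does not adapt per neighborhood, but it shows the core idea of compressing an unbounded luminance range into a displayable one:

```python
import numpy as np

def reinhard_global(luminance):
    """Global Reinhard tone curve: maps [0, inf) into [0, 1) while
    keeping a near-linear response in the shadows."""
    return luminance / (1.0 + luminance)

hdr = np.array([0.01, 1.0, 100.0, 10000.0])  # hypothetical scene luminances
ldr = reinhard_global(hdr)
print(ldr)  # all values land in [0, 1), order preserved
```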

 

Software

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include

Adobe Photoshop

Dynamic Photo HDR

HDR PhotoStudio

Luminance HDR

Oloneo PhotoEngine

Photomatix Pro

PTGui

Comparison with traditional digital images

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors that should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.[16][17][18]

 

HDR images often don't use fixed ranges per color channel, unlike traditional images, in order to represent many more colors over a much wider dynamic range. For that purpose, they don't use integer values to represent the single color channels (e.g., 0–255 in an 8-bit-per-channel interval for red, green and blue) but instead use a floating point representation. Common are 16-bit (half precision) or 32-bit floating point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
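The difference is easy to demonstrate with NumPy: an 8-bit integer channel clips everything outside its 0..255 window, while half-precision floats keep relative precision across the whole range (the sample luminances are arbitrary):

```python
import numpy as np

luminance = np.array([0.001, 1.0, 500.0, 65000.0])  # hypothetical values

# Device-referred 8-bit encoding: scale to 0..255 and clip.
as_uint8 = np.clip(luminance * 255, 0, 255).astype(np.uint8)

# Scene-referred half-precision floats keep both ends of the range.
as_half = luminance.astype(np.float16)

print(as_uint8)  # [  0 255 255 255] -- deep shadow and all highlights lost
print(as_half)   # both tiny and huge luminances survive (to ~3 digits)
```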

The idea of using several exposures to fix a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, the luminosity range being too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.[20]

 

Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This is effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. Smith spent five days printing the image to reproduce the tonal range of the scene, which extends from a bright lamp (relative to the scene) to a dark shadow.[22]

 

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which features dodging and burning prominently, in the context of his Zone System.

 

With the advent of color photography, tone mapping in the darkroom was no longer possible, due to the specific timing needed during the development of color film. Over the years, photographers looked to film manufacturers to design film stocks with improved response, or continued to shoot in black-and-white so that tone mapping methods could be used.

Film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force".[23] This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color.[24] The dynamic range of this extended range film has been estimated as 1:108.[25] It has been used to photograph nuclear explosions,[26] for astronomical photography,[27] for spectrographic research,[28] and for medical imaging.[29] Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

 

Late-twentieth century

The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel led by Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988.[30] In 1993 the same group introduced the first commercial medical camera that performed real-time capture of multiple images with different exposures to produce an HDR video image.[31]

 

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping this result. Global HDR was first introduced in 1993[1] resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.[2]

 

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory.[32] Mann's method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.[32]

 

In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping.

 

While custom high-dynamic-range digital video solutions had been developed for industrial manufacturing during the 1980s, it was not until the early 2000s that several scholarly research efforts used consumer-grade sensors and cameras.[34] A few companies such as RED[35] and Arri[36] have been developing digital sensors capable of a higher dynamic range. RED EPIC-X can capture HDRx images with a user selectable 1-3 stops of additional highlight latitude in the 'x' channel. The 'x' channel can be merged with the normal channel in post production software. With the advent of low-cost consumer digital cameras, many amateurs began posting tone mapped HDR time-lapse videos on the Internet, essentially a sequence of still photographs in quick succession. In 2010 the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a beam splitter and consumer grade HD video cameras.[37] Similar methods have been described in the academic literature in 2001[38] and 2007.[39]

 

Modern movies have often been filmed with cameras featuring a higher dynamic range, and legacy movies can be upgraded even if manual intervention is needed for some frames (as happened in the past when black-and-white films were upgraded to color). Special effects, especially those in which real and synthetic footage are seamlessly mixed, require both HDR shooting and rendering. HDR video is also needed in all applications that demand high accuracy in capturing temporal changes in the scene; this is especially important in the monitoring of some industrial processes such as welding, in predictive driver assistance systems in the automotive industry, and in surveillance systems, to name just a few possible applications. HDR video can also be used to speed up image acquisition in applications where a large number of static HDR images are needed, for example in image-based methods in computer graphics. Finally, with the spread of TV sets with enhanced dynamic range, broadcasting HDR video may become important, but may take a long time to occur due to standardization issues. For this particular application, enhancing current low-dynamic-range (LDR) video signals to HDR in intelligent TV sets seems to be a more viable near-term solution.

 

More and more CMOS image sensors now have high dynamic range capability within the pixels themselves. Such pixels are intrinsically non-linear (by design) so that the wide dynamic range of the scene is non-linearly compressed into a smaller dynamic range electronic representation inside the pixel.[41] Such sensors are used in extreme dynamic range applications like welding or automotive.

 

Some other sensors designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor for 30 fps video will give out 60 fps, with the odd frames at a short exposure time and the even frames at a longer exposure time. Some sensors can even combine the two images on-chip so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.

 

Quelle:

 

en.wikipedia.org/wiki/High-dynamic-range_imaging

 

de.wikipedia.org/wiki/High_Dynamic_Range_Image

  

Photography (see section below for etymology) is the art, science and practice of creating durable images by recording light or other electromagnetic radiation, either chemically by means of a light-sensitive material such as photographic film, or electronically by means of an image sensor.[1] Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. The result in an electronic image sensor is an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing.

 

The result in a photographic emulsion is an invisible latent image, which is later chemically developed into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.

 

Photography has many uses for business, science, manufacturing (e.g. photolithography), art, recreational purposes, and mass communication.

 

The word "photography" was created from the Greek roots φωτός (phōtos), genitive of φῶς (phōs), "light"[2] and γραφή (graphé) "representation by means of lines" or "drawing",[3] together meaning "drawing with light".[4]

 

Several people may have coined the same new term from these roots independently. Hercules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, photographie, in private notes which a Brazilian photography historian believes were written in 1834.[5] Johann von Maedler, a Berlin astronomer, is credited in a 1932 German history of photography with having used it in an article published on 25 February 1839 in the German newspaper Vossische Zeitung.[6] Both of these claims are now widely reported, but apparently neither has ever been independently confirmed beyond reasonable doubt. Credit has traditionally been given to Sir John Herschel both for coining the word and for introducing it to the public. His uses of it in private correspondence prior to 25 February 1839 and at his Royal Society lecture on the subject in London on 14 March 1839 have long been amply documented and accepted as settled fact.

 

History and evolution

Precursor technologies

Photography is the result of combining several technical discoveries. Long before the first photographs were made, Chinese philosopher Mo Di and Greek mathematicians Aristotle and Euclid described a pinhole camera in the 5th and 4th centuries BCE.[8][9] In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments,[10] Ibn al-Haytham (Alhazen) (965–1040) studied the camera obscura and pinhole camera,[9][11] Albertus Magnus (1193–1280) discovered silver nitrate,[12] and Georg Fabricius (1516–71) discovered silver chloride.[13] Techniques described in the Book of Optics are capable of producing primitive photographs using medieval materials.[14][15][16]

 

Daniele Barbaro described a diaphragm in 1566.[17] Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694.[18] The fiction book Giphantie, published in 1760, by French author Tiphaigne de la Roche, described what can be interpreted as photography.[17]

 

The discovery of the camera obscura, which provides an image of a scene, dates back to ancient China. Leonardo da Vinci mentions natural camerae obscurae formed by dark caves on the edge of a sunlit valley: a hole in the cave wall acts as a pinhole camera and projects a laterally reversed, upside-down image onto a piece of paper. The birth of photography was thus primarily concerned with developing a means to fix and retain the image produced by the camera obscura.

 

The first success of reproducing images without a camera occurred when Thomas Wedgwood, from the famous family of potters, obtained copies of paintings on leather using silver salts. Since he had no way of permanently fixing those reproductions (stabilizing the image by washing out the non-exposed silver salts), they would turn completely black in the light and thus had to be kept in a dark room for viewing.

 

Renaissance painters used the camera obscura which, in fact, gives the optical rendering in color that dominates Western art. Camera obscura literally means "dark chamber" in Latin. It is a box with a hole that allows light to pass through and form an image on a surface inside, such as a piece of paper.

 

First camera photography (1820s)

Invented in the early decades of the 19th century, photography by means of the camera seemed able to capture more detail and information than traditional media, such as painting and sculpture.[19] Photography as a usable process goes back to the 1820s with the development of chemical photography. The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it.[7] Niépce was successful again in 1825. He made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens), in 1826 or 1827.[20]

 

Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. Working in partnership with Louis Daguerre, he developed a somewhat more sensitive process that produced visually superior results, but it still required a few hours of exposure in the camera. Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process, the essential elements of which were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: the busy boulevard appears deserted because the moving pedestrian and horse-drawn traffic left no trace, but one man having his boots polished stood sufficiently still throughout the approximately ten-minute-long exposure to be visible. Eventually, France agreed to pay Daguerre a pension for his process in exchange for the right to present his invention to the world as the gift of France, which occurred on 19 August 1839.

Meanwhile, in Brazil, Hercules Florence had already created his own process in 1832, naming it Photographie, and an English inventor, William Fox Talbot, had created another method of making a reasonably light-fast silver process image but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his method and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, with exposures comparable to the daguerreotype. In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies, the basis of most chemical photography up to the present day. Daguerreotypes could only be replicated by rephotographing them with a camera.[21] Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence.[22][23]

 

John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839.

 

In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets to the collodion process; the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper.

 

Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1884, George Eastman developed an early type of film to replace photographic plates, leading to the technology used by film cameras today.

 

In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize for Physics in 1908.

 

Black-and-white

See also: Monochrome photography

All photography was originally monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost and its "classic" photographic look. The tones and the contrast between light and dark shadows define black-and-white photography.[24] Monochrome pictures are not necessarily pure black and white; depending on the process, they may contain other hues. The cyanotype process produces an image composed of blue tones; the albumen process, first used more than 150 years ago, produces brown tones.

 

Many photographers continue to produce some monochrome images, often because of the established archival permanence of well processed silver halide based materials. Some full color digital images are processed using a variety of techniques to create black and whites, and some manufacturers produce digital cameras that exclusively shoot monochrome.

 

Color

Color photography was explored beginning in the mid-19th century. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.

 

The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by physicist James Clerk Maxwell in 1855. Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image.

 

Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s.
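Maxwell's additive principle reduces to treating the three filtered grayscale records as the channels of one color image. A minimal sketch, with made-up pixel values rather than data from any real plate:

```python
def additive_combine(red_plate, green_plate, blue_plate):
    """Combine three grayscale 'separation' images (lists of rows of
    0-255 values) into one RGB image by stacking them as channels."""
    return [
        [(r, g, b) for r, g, b in zip(r_row, g_row, b_row)]
        for r_row, g_row, b_row in zip(red_plate, green_plate, blue_plate)
    ]

# A 1x2 example: one reddish pixel and one bluish pixel.
red   = [[200,  30]]
green = [[ 40,  60]]
blue  = [[ 30, 220]]
print(additive_combine(red, green, blue))
# [[(200, 40, 30), (30, 60, 220)]]
```

Projecting the three plates through matching filters and superimposing them, as described above, performs this same channel-stacking optically.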

 

Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images.

 

The development of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability.

 

Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s.

 

Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multilayer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure.

 

Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multilayer emulsion and the same principles, most closely resembling Agfa's product.

 

Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963.

 

Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment.

 

Digital photography

Main article: Digital photography

See also: Digital camera and Digital versus film photography

In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born.

 

Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.[25] An important difference between digital and chemical photography is that chemical photography resists photo manipulation, because it involves physical film and photographic paper, while digital imaging is an easily manipulated medium. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications.

  

Photography gained the interest of many scientists and artists from its inception. Scientists have used photography to record and study movements, such as Eadweard Muybridge's study of human and animal locomotion in 1887. Artists are equally interested by these aspects but also try to explore avenues other than the photo-mechanical representation of reality, such as the pictorialist movement.

 

Military, police, and security forces use photography for surveillance, recognition and data storage. Photography is used by amateurs to preserve memories, to capture special moments, to tell stories, to send messages, and as a source of entertainment. High speed photography allows for visualizing events that are too fast for the human eye.

 

Technical aspects

Main article: Camera

The camera is the image-forming device, and photographic film or a silicon electronic image sensor is the sensing medium. The respective recording medium can be the film itself, or a digital electronic or magnetic memory.[26]

 

Photographers control the camera and lens to "expose" the light recording material (such as film) to the required amount of light to form a "latent image" (on film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper or film.

 

The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. The subject being photographed, however, must be illuminated. Cameras range from very small to very large; a camera can even be a whole room that is kept dark while the object to be photographed is in another, properly illuminated room. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera).

 

As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens.

 

The movie camera is a type of photographic camera which takes a rapid sequence of photographs on strips of film. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures together to create the illusion of motion.[27]

 

Camera controls are interrelated. The total amount of light reaching the film plane (the 'exposure') changes with the duration of exposure, the aperture of the lens, and the effective focal length of the lens (which, in variable focal length lenses, can force a change in aperture as the lens is zoomed). Changing any of these controls can alter the exposure. Many cameras may be set to adjust most or all of these controls automatically. This automatic functionality is useful for occasional photographers in many situations.

 

The duration of an exposure is referred to as shutter speed, often even in cameras that do not have a physical shutter, and is typically measured in fractions of a second. Exposures from one to several seconds are common for still-life subjects, and for night scenes exposure times can be several hours. For a subject that is in motion, however, a fast shutter speed is needed to prevent the photograph from coming out blurry.[29]

 

The effective aperture is expressed by an f-number or f-stop (derived from focal ratio), the ratio of the focal length to the diameter of the aperture. At the same aperture diameter, a longer lens passes less light to the film plane, because the image-forming light spreads over a greater distance; a shorter focal length gives a brighter image with the same size of aperture.

 

The smaller the f/number, the larger the effective aperture. The present system of f/numbers to give the effective aperture of a lens was standardized by an international convention. There were earlier, different series of numbers in older cameras.

 

If the f-number is decreased by a factor of √2, the aperture diameter is increased by the same factor, and its area is increased by a factor of 2. The f-stops that might be found on a typical lens include 2.8, 4, 5.6, 8, 11, 16, 22, 32, where going up "one stop" (using lower f-stop numbers) doubles the amount of light reaching the film, and stopping down one stop halves the amount of light.
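The √2 progression is easy to verify numerically. The snippet below generates the idealized stop sequence (the stops marked on lenses are rounded versions of these values) and confirms that one stop corresponds to a factor of two in light-gathering area:

```python
import math

def f_stop_sequence(n_stops=9, start=2.8):
    """Ideal f-numbers: each stop multiplies N by sqrt(2)."""
    return [start * math.sqrt(2) ** i for i in range(n_stops)]

def relative_area(f_number):
    """Aperture area is proportional to 1 / N^2."""
    return 1.0 / f_number ** 2

stops = f_stop_sequence()
print([round(n, 1) for n in stops])
# [2.8, 4.0, 5.6, 7.9, 11.2, 15.8, 22.4, 31.7, 44.8]
# Stopping down one stop halves the light-gathering area:
print(relative_area(stops[0]) / relative_area(stops[1]))  # ~2.0
```

Note that the exact values drift slightly from the marked stops (7.9 vs. 8, 11.2 vs. 11), since the engraved numbers are conventional roundings of powers of √2.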

 

Image capture can be achieved through various combinations of shutter speed, aperture, and film or sensor speed. Different (but related) settings of aperture and shutter speed enable photographs to be taken under various conditions of film or sensor speed, lighting and motion of subjects and/or camera, and desired depth of field. A slower speed film will exhibit less "grain", and a slower speed setting on an electronic sensor will exhibit less "noise", while higher film and sensor speeds allow for a faster shutter speed, which reduces motion blur or allows the use of a smaller aperture to increase the depth of field.

 

For example, a wider aperture is used in lower light and a narrower aperture in brighter light. If a subject is in motion, then a fast shutter speed may be needed. A tripod can also be helpful in that it enables a slower shutter speed to be used.

 

For example, f/8 at 8 ms (1/125 of a second) and f/5.6 at 4 ms (1/250 of a second) yield the same amount of light. The chosen combination has an impact on the final result. The aperture and focal length of the lens determine the depth of field, which refers to the range of distances from the lens that will be in focus. A longer lens or a wider aperture will result in "shallow" depth of field (i.e. only a small plane of the image will be in sharp focus). This is often useful for isolating subjects from backgrounds as in individual portraits or macro photography.
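This equivalence can be checked with the standard exposure value formula EV = log₂(N²/t), where N is the f-number and t the shutter time in seconds. The marked "f/5.6" is really 4√2 ≈ 5.657; with the exact value, the two settings land on the same EV:

```python
import math

def exposure_value(f_number, shutter_s):
    """EV = log2(N^2 / t): a higher EV means less light reaches the sensor."""
    return math.log2(f_number ** 2 / shutter_s)

ev_a = exposure_value(8.0, 1 / 125)               # f/8 at 8 ms
ev_b = exposure_value(4 * math.sqrt(2), 1 / 250)  # exact "f/5.6" at 4 ms
print(round(ev_a, 3), round(ev_b, 3))  # 12.966 12.966
print(abs(ev_a - ev_b) < 1e-9)         # True: same exposure
```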

 

Conversely, a shorter lens, or a smaller aperture, will result in more of the image being in focus. This is generally more desirable when photographing landscapes or groups of people. With very small apertures, such as pinholes, a wide range of distance can be brought into focus, but sharpness is severely degraded by diffraction with such small apertures. Generally, the highest degree of "sharpness" is achieved at an aperture near the middle of a lens's range (for example, f/8 for a lens with available apertures of f/2.8 to f/16). However, as lens technology improves, lenses are becoming capable of making increasingly sharp images at wider apertures.
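The combined effect of focal length and aperture on depth of field can be made concrete with the standard hyperfocal distance approximation H ≈ f²/(N·c) + f. The circle-of-confusion value below is the conventional 0.03 mm figure for 35 mm full-frame, an assumption chosen purely for illustration:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance in mm: focusing here keeps everything
    from H/2 out to infinity acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# A wide lens stopped down has enormous depth of field...
print(round(hyperfocal_mm(28, 11) / 1000, 2), "m")    # ~2.4 m
# ...while a long lens wide open has very little.
print(round(hyperfocal_mm(200, 2.8) / 1000, 2), "m")  # ~476 m
```

The two orders of magnitude between these results illustrate why portrait photographers reach for long, fast lenses and landscape photographers for short, stopped-down ones.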

 

Image capture is only part of the image-forming process. Regardless of material, some process must be employed to render the latent image captured by the camera into a viewable image. With slide film, the developed film is simply mounted for projection. Print film requires the developed film negative to be printed onto photographic paper or transparency. Digital images may be uploaded to an image server (e.g., a photo-sharing web site), viewed on a television, or transferred to a computer or digital photo frame, and every type can be printed on more "classical" media such as regular paper or photographic paper.

 

Prior to the rendering of a viewable image, modifications can be made using several controls. Many of these controls are similar to controls during image capture, while some are exclusive to the rendering process. Most printing controls have equivalent digital concepts, but some create different effects. For example, dodging and burning controls are different between digital and film processes. Other printing modifications include:

Digital point-and-shoot cameras have become widespread consumer products, outselling film cameras and including new features such as video and audio recording. Kodak announced in January 2004 that it would no longer sell reloadable 35 mm cameras in western Europe, Canada and the United States after the end of that year; Kodak was at that time a minor player in the reloadable film camera market. In January 2006, Nikon followed suit and announced that it would stop producing all but two models of its film cameras: the low-end Nikon FM10 and the high-end Nikon F6. On 25 May 2006, Canon announced it would stop developing new film SLR cameras.[34] Though most new camera designs are now digital, a new 6x6 cm/6x7 cm medium format film camera was introduced in 2008 in a cooperation between Fuji and Voigtländer.[35][36]

 

According to a survey made by Kodak in 2007, when the majority of photography was already digital, 75 percent of professional photographers said they would continue to use film, even though some embraced digital.[37]

 

According to the PMA, nearly a billion rolls of film were sold in 2000; by 2011 this had fallen to a mere 20 million rolls, plus 31 million single-use cameras.[38]

 

Source:

en.wikipedia.org/wiki/Photography

de.wikipedia.org/wiki/Fotografie

 

My gifts to all my flickr friends:

Best free antivirus software, from Microsoft.

windows.microsoft.com/en-US/windows/products/security-ess...

 

Best free music from Rolling Stone magazine.

www.spotify.com/us/start/?utm_source=spotify&utm_medi...

Best free computer cleaner, from Yahoo.

downloads.yahoo.com/software/windows-web-tools-ccleaner-s...

I use all of these, and the only problem I have had is running Spotify and listening to music while surfing flickr: the two together use too much bandwidth.

Anyway, this is the best I can do from here. Merry Christmas and have a great New Year.

This is my first homemade Christmas card

 

youtu.be/Chu9AorVuUU

 

youtu.be/uH8FvERQHtM

Sleeping beneath the Staffelberg

 

The Lower Bavarian Staffelberg (793 m) is a striking, pyramid-shaped hill in the southern Bavarian Forest directly above the town of Hauzenberg, which belongs to the district of Passau.

 

On the densely wooded summit, the people of Hauzenberg erected a small observation tower that offers a good view of the town and of the Freudensee. Next to it stand a summit cross and several benches. Three marked trails, all laid out quite steeply, lead up the Staffelberg: from the Freudensee, from Germannsdorf and from Hauzenberg.

 

Source:

de.wikipedia.org/wiki/Staffelberg_(Niederbayern)

 


 

Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1884, George Eastman developed an early type of film to replace photographic plates, leading to the technology used by film cameras today.

 

In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize for Physics in 1908.

 

Black-and-white

See also: Monochrome photography

All photography was originally monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost and its "classic" photographic look. The tones and the contrast between light and dark shadows define black-and-white photography.[24] Some monochrome images are not pure black and white but contain other hues, depending on the process: the cyanotype process produces an image composed of blue tones, and the albumen process, first used more than 150 years ago, produces brown tones.

 

Many photographers continue to produce some monochrome images, often because of the established archival permanence of well processed silver halide based materials. Some full color digital images are processed using a variety of techniques to create black and whites, and some manufacturers produce digital cameras that exclusively shoot monochrome.

 

Color

Color photography was explored beginning in the mid-19th century. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.

 

The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by physicist James Clerk Maxwell in 1855. Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image.

 

Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s.
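The additive method can be sketched numerically: each filtered black-and-white record becomes one channel of an RGB image, and superimposing the three projections amounts to stacking the channels. A minimal illustration with NumPy (the tiny arrays here are invented toy data, not real separations):

```python
import numpy as np

# Three hypothetical black-and-white records, shot through
# red, green and blue filters (intensities in 0.0..1.0).
red_record   = np.array([[1.0, 0.0], [0.5, 0.0]])
green_record = np.array([[0.0, 1.0], [0.5, 0.0]])
blue_record  = np.array([[0.0, 0.0], [0.5, 1.0]])

# Additive synthesis: superimpose the three filtered projections
# by stacking them as the R, G and B channels of one image.
color = np.stack([red_record, green_record, blue_record], axis=-1)

print(color.shape)  # (2, 2, 3)
# Equal parts of all three primaries add up to neutral grey:
print(color[1, 0])  # [0.5 0.5 0.5]
```

Carbon printing in complementary colors works the other way around, subtracting light from white rather than adding it to black.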

 

Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images.

 

The development of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability.

 

Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s.

 

Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multilayer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure.

 

Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multilayer emulsion and the same principles, most closely resembling Agfa's product.

 

Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963.

 

Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment.

 

Digital photography

Main article: Digital photography

See also: Digital camera and Digital versus film photography

In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. The Mavica saved images to disk and displayed them on a television screen, but the camera was not fully digital. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single-lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born.

 

Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.[25] An important difference between digital and chemical photography is that chemical photography, being tied to film and photographic paper, resists manipulation, whereas digital images are easily manipulated. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications.

  

Photography gained the interest of many scientists and artists from its inception. Scientists have used photography to record and study movements, such as Eadweard Muybridge's study of human and animal locomotion in 1887. Artists are equally interested by these aspects but also try to explore avenues other than the photo-mechanical representation of reality, such as the pictorialist movement.

 

Military, police, and security forces use photography for surveillance, recognition and data storage. Photography is used by amateurs to preserve memories, to capture special moments, to tell stories, to send messages, and as a source of entertainment. High speed photography allows for visualizing events that are too fast for the human eye.

 

Technical aspects

Main article: Camera

The camera is the image-forming device, and photographic film or a silicon electronic image sensor is the sensing medium. The respective recording medium can be the film itself, or a digital electronic or magnetic memory.[26]

 

Photographers control the camera and lens to "expose" the light recording material (such as film) to the required amount of light to form a "latent image" (on film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper or film.

 

The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. The subject being photographed, however, must be illuminated. Cameras range from small to very large: a camera can even be a whole room that is kept dark while the object to be photographed is in another, properly illuminated room. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera).

 

As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens.

 

The movie camera is a type of photographic camera which takes a rapid sequence of photographs on strips of film. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures together to create the illusion of motion.[27]

 

Camera controls are interrelated. The total amount of light reaching the film plane (the 'exposure') changes with the duration of exposure, aperture of the lens, and on the effective focal length of the lens (which in variable focal length lenses, can force a change in aperture as the lens is zoomed). Changing any of these controls can alter the exposure. Many cameras may be set to adjust most or all of these controls automatically. This automatic functionality is useful for occasional photographers in many situations.

 

The duration of an exposure is referred to as shutter speed, often even in cameras that do not have a physical shutter, and is typically measured in fractions of a second. Exposures of one to several seconds are quite possible, usually for still-life subjects, and for night scenes exposure times can run to several hours. For a subject in motion, however, a fast shutter speed is needed to prevent the photograph from coming out blurred.[29]

 

The effective aperture is expressed by an f-number or f-stop (derived from focal ratio), which is proportional to the ratio of the focal length to the diameter of the aperture. Longer lenses will pass less light even though the diameter of the aperture is the same due to the greater distance the light has to travel; shorter lenses (a shorter focal length) will be brighter with the same size of aperture.

 

The smaller the f-number, the larger the effective aperture. The present system of f-numbers for giving the effective aperture of a lens was standardized by an international convention; older cameras used various earlier series of numbers.

 

If the f-number is decreased by a factor of √2, the aperture diameter is increased by the same factor, and its area is increased by a factor of 2. The f-stops that might be found on a typical lens include 2.8, 4, 5.6, 8, 11, 16, 22, 32, where going up "one stop" (using lower f-stop numbers) doubles the amount of light reaching the film, and stopping down one stop halves the amount of light.
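The √2 progression can be checked with a few lines of arithmetic, a sketch only: the marketed stop numbers are rounded values of powers of √2, and the light passed is proportional to 1/N².

```python
import math

# The standard full-stop series as powers of sqrt(2): f/2.8 .. f/32.
stops = [math.sqrt(2) ** n for n in range(3, 11)]
print([round(s, 1) for s in stops])
# [2.8, 4.0, 5.7, 8.0, 11.3, 16.0, 22.6, 32.0]  (marketed as 2.8, 4, 5.6, 8, 11, 16, 22, 32)

# Light passed is proportional to the aperture area, i.e. to 1/N^2,
# so opening up by one stop roughly doubles the light.
def relative_light(n_wide, n_narrow):
    return (n_narrow / n_wide) ** 2

print(relative_light(5.6, 8))  # about 2: f/5.6 passes twice the light of f/8
```

The small deviations from the round marketed numbers (5.7 vs 5.6, 11.3 vs 11) are simply the rounding conventions of lens markings.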

 

Image capture can be achieved through various combinations of shutter speed, aperture, and film or sensor speed. Different (but related) settings of aperture and shutter speed enable photographs to be taken under various conditions of film or sensor speed, lighting and motion of subjects and/or camera, and desired depth of field. A slower speed film will exhibit less "grain", and a slower speed setting on an electronic sensor will exhibit less "noise", while higher film and sensor speeds allow for a faster shutter speed, which reduces motion blur or allows the use of a smaller aperture to increase the depth of field.

 

For example, a wider aperture is used in lower light and a narrower aperture in brighter light. If a subject is in motion, a fast shutter speed may be needed. A tripod can also be helpful in that it enables a slower shutter speed to be used.

 

For example, f/8 at 8 ms (1/125 of a second) and f/5.6 at 4 ms (1/250 of a second) yield the same amount of light. The chosen combination has an impact on the final result. The aperture and focal length of the lens determine the depth of field, which refers to the range of distances from the lens that will be in focus. A longer lens or a wider aperture will result in "shallow" depth of field (i.e. only a small plane of the image will be in sharp focus). This is often useful for isolating subjects from backgrounds as in individual portraits or macro photography.
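The equivalence of the two settings can be verified with the standard exposure-value formula EV = log₂(N²/t). Using the exact value behind "f/5.6" (4·√2) rather than the rounded marketed number, the two combinations admit exactly the same light:

```python
import math

def exposure_value(f_number, shutter_s):
    """EV = log2(N^2 / t); equal EV means equal light reaching the film or sensor."""
    return math.log2(f_number ** 2 / shutter_s)

# f/8 at 1/125 s versus the exact "f/5.6" (= 4*sqrt(2)) at 1/250 s:
ev_a = exposure_value(8.0, 1 / 125)
ev_b = exposure_value(4 * math.sqrt(2), 1 / 250)
print(round(ev_a, 3), round(ev_b, 3))  # 12.966 12.966 -> the same exposure
```

Halving the exposure time while opening the aperture by one stop leaves the total light unchanged; only depth of field and motion rendering differ.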

 

Conversely, a shorter lens, or a smaller aperture, will result in more of the image being in focus. This is generally more desirable when photographing landscapes or groups of people. With very small apertures, such as pinholes, a wide range of distance can be brought into focus, but sharpness is severely degraded by diffraction with such small apertures. Generally, the highest degree of "sharpness" is achieved at an aperture near the middle of a lens's range (for example, f/8 for a lens with available apertures of f/2.8 to f/16). However, as lens technology improves, lenses are becoming capable of making increasingly sharp images at wider apertures.
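The depth-of-field behaviour described above can be quantified with the standard hyperfocal-distance approximation H ≈ f²/(N·c) + f. This is a sketch under assumptions not stated in the text: the circle-of-confusion value c = 0.03 mm used here is a common convention for full-frame cameras, and the 50 mm focal length is an arbitrary example.

```python
# Hyperfocal distance H (all lengths in millimetres): focusing at H
# renders everything from H/2 to infinity acceptably sharp.
#   f = focal length, n = f-number, c = circle of confusion (assumed 0.03 mm).
def hyperfocal_mm(f, n, c=0.03):
    return f * f / (n * c) + f

# Stopping a 50 mm lens down from f/2.8 to f/8 shortens the hyperfocal
# distance, i.e. pulls much more of the scene into acceptable focus.
print(round(hyperfocal_mm(50, 2.8) / 1000, 1))  # 29.8 (metres)
print(round(hyperfocal_mm(50, 8) / 1000, 1))    # 10.5 (metres)
```

The same formula shows why a longer lens (larger f) or wider aperture (smaller N) yields the "shallow" depth of field useful for portraits.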

 

Image capture is only part of the image forming process. Regardless of material, some process must be employed to render the latent image captured by the camera into a viewable image. With slide film, the developed film is just mounted for projection. Print film requires the developed film negative to be printed onto photographic paper or transparency. Digital images may be uploaded to an image server (e.g., a photo-sharing web site), viewed on a television, or transferred to a computer or digital photo frame. Every type can also be printed on more "classical" media such as regular paper or photographic paper, for example.

 

Prior to the rendering of a viewable image, modifications can be made using several controls. Many of these controls are similar to controls during image capture, while some are exclusive to the rendering process. Most printing controls have equivalent digital concepts, but some create different effects; for example, dodging and burning controls differ between digital and film processes. Other printing modifications are possible as well.

Digital point-and-shoot cameras have become widespread consumer products, outselling film cameras, and including new features such as video and audio recording. Kodak announced in January 2004 that it would no longer sell reloadable 35 mm cameras in western Europe, Canada and the United States after the end of that year. Kodak was at that time a minor player in the reloadable film cameras market. In January 2006, Nikon followed suit and announced that they will stop the production of all but two models of their film cameras: the low-end Nikon FM10, and the high-end Nikon F6. On 25 May 2006, Canon announced they will stop developing new film SLR cameras.[34] Though most new camera designs are now digital, a new 6x6cm/6x7cm medium format film camera was introduced in 2008 in a cooperation between Fuji and Voigtländer.[35][36]

 

According to a survey made by Kodak in 2007 when the majority of photography was already digital, 75 percent of professional photographers say they will continue to use film, even though some embrace digital.[37]

 

According to the PMA, nearly a billion rolls of film were sold in 2000; by 2011 this had fallen to a mere 20 million rolls, plus 31 million single-use cameras.[38]

 

Source:

en.wikipedia.org/wiki/Photography

de.wikipedia.org/wiki/Fotografie

   

The high brown fritillary (Argynnis adippe) is a butterfly of the family Nymphalidae. In German-language literature it is also known as Adippe-Perlmutterfalter, Feuriger Perlmuttfalter, Feuriger Waldhügelland-Perlmutterfalter, Märzveilchenfalter, Märzveilchen-Perlmutterfalter and Hundsveilchen-Perlmutterfalter.

  

The adults reach a wingspan of 40 to 45 millimetres. As in many fritillaries, the uppersides of the wings are orange with a black pattern. On the upperside of the forewings the males bear clearly visible streaks of scent scales along veins Cu1 and Cu2, and on the hindwings a hair tuft on the radial vein. On the orange underside of the hindwings, besides several finely black-edged mother-of-pearl spots, small white spots with brown edging in the postdiscal region are characteristic; these are absent in the similar dark green fritillary (Argynnis aglaja). Along the outer margin runs a row of larger mother-of-pearl spots, slightly pointed on their inner side and with brownish edging.[1]

  

The caterpillars have a greyish ground colour. A broad black longitudinal band, interrupted between each segment, runs along the back; this distinguishes them from the similar caterpillars of the niobe fritillary (Argynnis niobe). Within the band, exactly along the midline of the back, runs a fine, likewise interrupted pale line. The head capsule, like the numerous branched spines on the body, is brown.

  

Argynnis adippe auresiana Fruhstorfer, 1908. The green dusting on the underside of the hindwing is darker and the basal mother-of-pearl spots are smaller or absent. In the males, the scent-scale streaks of the forewing and the hair tuft of the hindwing are smaller. One generation per year is produced, flying from June to early August. This subspecies prefers open, dry places with shrub growth, flower-rich slopes, open woodland and rocky gorges with sparse vegetation. It occurs in North Africa and Algeria.[1]

Argynnis adippe f. cleodexa Ochsenheimer, 1816. On the underside these butterflies have no silver spots apart from the centres of the brown postdiscal spots. The ratio of the two forms f. cleodexa and f. adippe varies greatly by region. In northern and north-eastern Europe (England, Belgium, the Netherlands, Denmark, Fennoscandia) f. adippe dominates very strongly and f. cleodexa is very rare. In central France, Germany, Austria and northern Switzerland f. cleodexa is encountered somewhat more often, and it predominates in southern Switzerland, northern Italy, Slovenia and the Apennines. On Sicily f. adippe appears to be entirely absent. Although f. cleodexa is common in the Pyrenees, the form without silver spots is again rare in Spain. From Hungary south-eastwards across the Balkan Peninsula f. cleodexa becomes ever more frequent, until in Greece f. adippe no longer occurs.

The species occurs from north-west Africa across almost all of Europe and temperate Asia, eastwards as far as Japan. It is absent north of the Arctic Circle, from large parts of the British Isles and from most Mediterranean islands. It is found in dry, grassy and bushy terrain and along the edges and clearings of open woodland.[1] Unlike the dark green fritillary, it is not troubled by scrub encroachment.

The adults take nectar from composites and thistles and can be found in groups, especially at woodland edges.

  

Flight season and larval period

The adults fly in one generation from mid-June to August. The caterpillars are found from August onwards and, after overwintering, until June.

  

Larval food plants

The caterpillars feed on violets (Viola).

The females lay their conical, longitudinally ribbed eggs on the leaves of the food plants. The caterpillars overwinter fully developed inside the egg, but do not hatch until spring. Pupation takes place in a plump hanging pupa attached to sturdy stems near the ground. The pupa is brown with several silvery metallic spots; it is distinctly smoother and has less pronounced projections than that of the silver-washed fritillary (Argynnis paphia).

The high brown fritillary (Fabriciana adippe) is a butterfly of the family Nymphalidae, native from Europe across mainland Asia to Japan. The adults fly in July and August and lay their eggs close to the larval food plants, which are species of violet (as with the pearl-bordered fritillary). The eggs are often laid where there is dead bracken on the ground or, in areas where the underlying rock is limestone, in moss overlying the rocks. The grass-and-bracken mosaics it favours are typically one-third grass and two-thirds bracken. It prefers drier conditions than its more common relative Argynnis aglaja (though not as dry as the Queen of Spain fritillary), favouring sandy or rocky hills and banks with patches of the larval food plant. It is among the first butterfly species to disappear when the vegetation becomes too lush.

  

Bugle, bramble and thistle flowers are favourite nectar sources for the adult.

  

This species has legal protection in the UK under the Wildlife and Countryside Act 1981. The UK distribution can be found on the NBN website.

  

Source:

en.wikipedia.org/wiki/High_Brown_Fritillary

de.wikipedia.org/wiki/Feuriger_Perlmutterfalter#cite_note...

 

Fotografie or Photographie (photography; from the Greek φῶς, phos, genitive φωτός, photos, "light (of the heavenly bodies)", "brightness", and γράφειν, graphein, "to draw", "to scratch", "to paint", "to write") denotes

  

an imaging method[1] in which, with the aid of optical processes, a light image is projected onto a light-sensitive medium and either stored there directly and permanently (the analogue process) or converted into electronic data and stored (the digital process).

the permanent light image itself (transparency, film frame or paper print; colloquially also called a photo) produced by photographic processes; this can be either a positive or a negative on film, foil, paper or other photographic supports. Photographic images are reproduced as prints, enlargements, film copies, or as exposures or printouts of digital image files. The corresponding profession is that of the photographer.

images taken for the cinema. Any number of photographic images are recorded on film as sequences of individual frames, which can later be shown with a film projector as moving images (see Film).

  

The term Photographie was first used (before any English or French publication) on 25 February 1839 by the astronomer Johann Heinrich von Mädler in the Vossische Zeitung.[2] Until well into the 20th century, photography denoted all images produced purely by light on a chemically treated surface. The German spelling reform of 1901 recommended the spelling "Fotografie", which, however, has not fully prevailed to this day. Mixed spellings such as "Fotographie" or "Photografie", and adjectives or nouns derived from them, have always been incorrect.

  

General

Photography is a medium used in very diverse contexts. Photographic images can, for example, be objects of a primarily artistic character (artistic photography) or a primarily commercial one (industrial photography, advertising and fashion photography). Photography can be considered from artistic, technical (photographic technology), economic (photographic industry) and societal (amateur, worker and documentary photography) perspectives. Photographs are also used in journalism and in medicine.

  

Photography is in part a subject of research and teaching in art history and in the still young discipline of image science. Whether photography can be art was long disputed but, since the photographic style of Pictorialism around the turn of the 20th century, is ultimately no longer contested. Some research traditions assign photography to media studies or communication studies; this classification, too, is disputed.

  

In the course of technological development, the shift from classical analogue (silver) photography to digital photography gradually took place at the beginning of the 21st century. The worldwide collapse of the associated industry for analogue cameras, and also for consumables (films, photographic paper, photo chemicals, darkroom equipment), has meant that photography is increasingly being studied from a cultural-studies and cultural-historical perspective as well. General cultural aspects in research include, for example, the preservation and documentation of practical knowledge of the photographic processes for exposure and processing, and also the change in how photography is handled in everyday life. Archiving and preservation techniques for analogue images, and likewise system-independent long-term digital data storage, are becoming increasingly interesting from a cultural-historical point of view.

  

Photography is subject to complex and multilayered photography law; when existing photographs are used, the image rights must be observed.

  

Photographic technology

In principle, photography is usually done with the aid of an optical system, in many cases a lens. This projects the light emitted or reflected by an object onto the light-sensitive layer of a photographic plate or film, or onto a photoelectric converter, an image sensor.

  

→ Main article: Photographic technology

Photographic cameras

→ Main article: Camera

A photographic apparatus (camera) serves to make the exposure. By manipulating the optical system (among other things, setting the aperture, focusing, colour filtering, and choosing the exposure time, lens focal length, lighting and, not least, the recording material), the photographer or camera operator has numerous creative options. The single-lens reflex camera has established itself as the most versatile camera design in both the analogue and the digital domain. For many tasks, a wide variety of special cameras continue to be required and used.

  

Light-sensitive layer

In film-based photography (e.g. silver photography), the light-sensitive layer on the image plane is a dispersion (in common usage, an emulsion). It consists of a gel in which small grains of a silver halide (for example silver bromide) are evenly distributed. The smaller the grain, the less light-sensitive the layer is (see the ISO 5800 standard), but the better its resolution ("grain"). This light-sensitive layer is given stability by a support. Support materials include cellulose acetate (cellulose nitrate, i.e. celluloid, served this purpose in earlier times), plastic films, metal plates, glass plates and even textiles (see photographic plate and film).

  

In digital photography, the equivalent of the light-sensitive layer is a chip such as a CCD or CMOS sensor.

  

Development and fixing

In film-based photography, development makes the latent image visible by chemical means. In fixing, the unexposed silver halide grains are made water-soluble and then washed out with water, so that the image can be viewed in daylight without darkening further.

  

Another, older process is the dusting-on process, which can be used to produce images that are fired onto glass and porcelain.

  

A digital image does not need to be developed; it is stored electronically and can then be edited on a computer with image-editing software and, if required, exposed onto photographic paper or printed out, for example with an inkjet printer. The further processing of raw data is here also referred to as development.

  

The print

A print is the result of a contact copy, an enlargement or an exposure; as a rule, the result is a paper image. Prints can be made from film (negative or slide) or from files.

  

Prints made as contact copies are the same size as the recording format; if an enlargement is made from the negative or positive, the resulting image is a multiple of the size of the original. As a rule, however, the aspect ratio is retained, which in classical photography is 1.5 (3:2) or, in the USA, 4:5.

An exception is the cropped enlargement, whose aspect ratio can be set arbitrarily in the stage of an enlarger; even a cropped enlargement, however, is usually exposed onto a paper format with specific dimensions.

  

The print is a frequently chosen form of presentation in amateur photography, collected in special boxes or albums. With slide projection as a form of presentation, one generally works with the original transparency, that is, a unique original, whereas prints are always copies.

  

History of photography

→ Main article: History and development of photography

Precursors and prehistory

The word camera derives from the forerunner of photography, the camera obscura ("dark chamber"), which had been known since the 11th century and was used by astronomers at the end of the 13th century to observe the sun. Instead of a lens, this camera has only a small hole through which light rays fall onto a projection surface, from which the upside-down, laterally reversed image can be traced. In Edinburgh and in Greenwich near London, walk-in, room-sized camerae obscurae are a tourist attraction; the Deutsches Filmmuseum also has a camera obscura, in which an image of the opposite bank of the Main is projected.

  

A breakthrough came in 1550 with the reinvention of the lens, which produced brighter and at the same time sharper images. In 1685 followed the deflecting mirror, with which the image could be traced onto paper.

  

The 18th century brought the magic lantern (laterna magica), the panorama, and the diorama. Chemists such as Humphry Davy were already beginning to investigate light-sensitive substances and to search for fixing agents.

  

The early processes

What is probably the world's first photograph was made in the early autumn of 1826 by Joseph Nicéphore Niépce using the heliography process. In 1837 Louis Jacques Mandé Daguerre used an improved process based on developing the image with mercury vapor and then fixing it in a hot solution of common salt or a room-temperature solution of sodium thiosulfate. The pictures produced in this way, all of them unique images on silvered copper plates, were called daguerreotypes. As early as 1835 the Englishman William Fox Talbot had invented the negative-positive process. Some of these historical processes are still used today as so-called noble printing processes (Edeldruckverfahren) in the fine arts and in artistic photography.

  

In 1883 the influential Leipzig weekly Illustrirte Zeitung became the first German publication to print a screened photograph, in the form of an autotype (halftone), an invention made by Georg Meisenbach around 1880.

  

20th century

At first, photographs could only be produced as unique originals; with the introduction of the negative-positive process, reproduction by contact printing became possible. In both cases the size of the finished photograph matched the capture format, which required very large, unwieldy cameras. The roll film, and above all the 35 mm camera developed by Oskar Barnack at the Leitz works and introduced in 1924, which used conventional 35 mm cine film, opened up entirely new possibilities for mobile, fast photography. Although the small format required additional enlarging equipment and its image quality could not remotely match that of the large formats, 35 mm established itself as the standard format in most areas of photography.

  

Analog photography

→ Main article: Analog photography

The term

To distinguish it from the new photographic techniques of digital photography, the term analog photography appeared at the beginning of the 21st century,[3] along with a revival of the by then antiquated spelling Photographie (instead of Fotografie).

  

To explain the then-new technology of digital image-file storage to the public from 1990 onward, some publications compared it technically with the analog image storage of the still-video cameras used until then. Through translation errors and misinterpretations, and because technical understanding of digital camera technology was still generally lacking, some journalists subsequently and mistakenly referred to the classic film-based camera systems as analog cameras as well.[4][5]

  

The term has survived to this day, but it now incorrectly denotes not photography using the analog storage technology of the first digital still-video cameras, but only the technique of film-based photography, in which the image is not "stored" either digitally or in analog form but fixed chemically and physically.

  

General

A photograph can be neither analog nor digital. Only the image information can be measured point by point as physical, analog signals (densitometry, spectroscopy) and, if required, digitized afterwards.

  

After the film has been exposed, the image information at first exists only latently. It is stored not in the "analog" camera but only during development of the film, by chemical reaction in a three-dimensional gelatin layer (film has several sensitized layers lying on top of one another). The image information is then present directly on the original recording medium (slide or negative) and is visible without further aids as a photograph (a unique original) in the form of developed silver halides or color couplers. If desired, a paper print can be produced from such a photograph in a second chemical process in the photo lab, or nowadays by scanning and printing.

  

With digital storage, the analog signals from the camera sensor are digitized in a second stage and thereby become electronically interpretable and processable. Digital image storage, using an analog-to-digital converter after readout from the chip of the digital camera, works (in simplified terms) with a merely two-dimensional digital interpretation of the analog image information and produces a file of differentially determined digital absolute values that can be copied any number of times, practically without loss. These files are written to memory cards inside the camera immediately after the exposure; with suitable image-processing software they can then be read, processed further, and output as a visible photograph on a monitor or printer.
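As a minimal sketch of the digitizing stage described above (the normalized voltage range and the 8-bit depth are assumptions chosen for illustration; real camera converters typically use 10 to 14 bits):

```python
# Sketch of an analog-to-digital conversion step: a sensor signal,
# normalized here to the range 0.0 .. 1.0, is mapped to one of
# 2**bits discrete levels. This is a simplification of what the
# converter in a digital camera does after sensor readout.
def quantize(voltage, bits=8):
    levels = 2 ** bits
    v = min(max(voltage, 0.0), 1.0)          # clip to the valid range
    return min(int(v * levels), levels - 1)  # map to 0 .. levels - 1

samples = [0.0, 0.25, 0.5, 1.0]
print([quantize(v) for v in samples])  # [0, 64, 128, 255]
```

The resulting integers are what gets written to the memory card; unlike the latent chemical image, they can be copied without loss.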

  

Digital photography

  

The first CCD (charge-coupled device) still-video camera was built by Bell in 1970, and in 1972 Texas Instruments filed the first patent on a filmless camera, which used a television screen as its viewfinder.

  

In 1973 Fairchild Imaging produced the first commercial CCD, with a resolution of 100 × 100 pixels.

  

This CCD was used in 1975 in the first working digital camera, built at Kodak by the inventor Steven Sasson. The camera weighed 3.6 kilograms, was larger than a toaster, and needed 23 seconds to record a black-and-white image of 100 × 100 pixels onto a digital magnetic tape cassette; displaying the image on a screen took another 23 seconds.

  

In 1986 Canon presented the RC-701, the first commercially available still-video camera with magnetic recording of the image data, and Minolta presented the Still Video Back SB-90/SB-90S for the Minolta 9000: exchanging the back of this 35 mm SLR turned it into a digital single-lens reflex camera, with the image data stored on 2-inch floppy disks.

  

In 1987 further models in Canon's RC series followed, as well as digital cameras from Fujifilm (ES-1), Konica (KC-400) and Sony (MVC-A7AF). Nikon followed in 1988 with the QV-1000C, and in 1990 and 1991 came Kodak with the DCS (Digital Camera System) and Rollei with the Digital Scan Pack. From the early 1990s onward, digital photography can be regarded as established in commercial image production.

  

Digital photography revolutionized the possibilities of digital art, but it also makes photo manipulation considerably easier.

  

Photokina 2006 showed that the era of the film-based camera was definitively over.[6] In 2007, 91 percent of all photo cameras sold worldwide were digital,[7] and conventional film photography shrank to niche markets. In 2011, around 45.4 million people in Germany had a digital camera in their household, and in the same year around 8.57 million digital cameras were sold in Germany.[8]

  

See also: Chronology of photography and History and development of photography

Photography as art

  

The status of photography as art was long disputed; the art theorist Karl Pawek put it pointedly in his book Das optische Zeitalter (Olten/Freiburg i. Br. 1963, p. 58): "The artist creates reality, the photographer sees it."

  

This view regards photography merely as a technical, standardized procedure by which reality is depicted in an objective, quasi "natural" way, without any creative, and thus artistic, aspects coming into play: "the invention of an apparatus for the purpose of producing … (perspectival) images has, ironically, reinforced the conviction … that this is the natural form of representation. Apparently something is natural if we can build a machine that does it for us."[9] Nevertheless, photographs soon served as teaching aids and models in the training of visual artists (études d'après nature).

  

Yet texts of the 19th century had already pointed to the artistic character of photography, justified by a use of the technique similar to that of other recognized contemporary graphic processes (aquatint, etching, lithography, and so on). On this view, photography too becomes an artistic process with which a photographer creates pictorial realities of his own.[10]

  

Numerous painters of the 19th century, such as Eugène Delacroix, recognized this as well and used photographs as a means of composition and design, as an artistic drafting tool for painted works, though still without granting them any artistic value of their own.

  

The photographer Henri Cartier-Bresson, himself trained as a painter, likewise wanted photography to be regarded not as an art form but as a craft: "Photography is a craft. Many want to turn it into an art, but we are simply craftsmen who must do their work well." At the same time, however, he claimed for himself the compositional concept of the decisive moment, originally worked out by Gotthold Ephraim Lessing for the poetics of drama, and thereby referred directly to an artistic procedure for the production of works of art. Cartier-Bresson's argument thus served, on the one hand, poetological ennoblement and, on the other, immunization as craft against any criticism that might question the artistic quality of his works. Indeed, Cartier-Bresson's photographs were shown in museums and art exhibitions very early on, for example in the MoMA retrospective (1947) and the Louvre exhibition (1955).

  

Photography was practiced as an art from early on (Julia Margaret Cameron, Lewis Carroll and Oscar Gustave Rejlander in the 1860s). The decisive step toward the recognition of photography as an art form is owed to the efforts of Alfred Stieglitz (1864–1946), who prepared the breakthrough with his magazine Camera Work.

  

Photography first appeared before the German public on a notable scale at the 1929 Werkbund exhibition in Stuttgart, with international artists such as Edward Weston, Imogen Cunningham and Man Ray; at the latest since the MoMA exhibitions of Edward Steichen (The Family of Man, 1955) and John Szarkowski (1960s), photography has been recognized as art by a broad public, while at the same time a trend toward applied art began.

  

In 1977 documenta 6 in Kassel, in its famous photography department, became the first internationally important exhibition to place the works of historical and contemporary photographers from the entire history of the medium in a comparative context with contemporary art, on the occasion of the "150 years of photography" celebrated that year.

  

Today photography is accepted as a fully fledged art form. Indicators include the growing number of museums, collections and research institutions for photography, the increase in professorships for photography, and not least the increased value of photographs at art auctions and among collectors. Numerous genres have developed, such as landscape, nude, industrial and theater photography, each of which has established its own field of activity within photography. In addition, artistic photomontage has developed into an art object of equal standing with painting. Alongside the rising number of photo exhibitions and their visitor numbers, the popularity of modern photography is also visible in the prices achieved at art auctions: five of the ten highest bids for modern photographs have been achieved at auctions since 2010, and the currently most expensive photograph, "Rhein II" by Andreas Gursky, was sold at an art auction in New York in November 2011 for 4.3 million dollars.[11] Recent discussions within photography and art studies, however, point to a growing arbitrariness in the categorization of photography: increasingly, art and its institutions absorb what once belonged exclusively to the applied branches of the medium.

  

Photography (see section below for etymology) is the art, science and practice of creating durable images by recording light or other electromagnetic radiation, either chemically by means of a light-sensitive material such as photographic film, or electronically by means of an image sensor.[1] Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. The result in an electronic image sensor is an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing.

  

The result in a photographic emulsion is an invisible latent image, which is later chemically developed into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.

  

Photography has many uses for business, science, manufacturing (e.g. photolithography), art, recreational purposes, and mass communication.

  

The word "photography" was created from the Greek roots φωτός (phōtos), genitive of φῶς (phōs), "light"[2] and γραφή (graphé) "representation by means of lines" or "drawing",[3] together meaning "drawing with light".[4]

  

Several people may have coined the same new term from these roots independently. Hercules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, photographie, in private notes which a Brazilian photography historian believes were written in 1834.[5] Johann von Maedler, a Berlin astronomer, is credited in a 1932 German history of photography as having used it in an article published on 25 February 1839 in the German newspaper Vossische Zeitung.[6] Both of these claims are now widely reported, but apparently neither has ever been independently confirmed beyond reasonable doubt. Credit has traditionally been given to Sir John Herschel both for coining the word and for introducing it to the public. His uses of it in private correspondence prior to 25 February 1839 and at his Royal Society lecture on the subject in London on 14 March 1839 have long been amply documented and accepted as settled fact.

  

History and evolution

Precursor technologies

Photography is the result of combining several technical discoveries. Long before the first photographs were made, the Chinese philosopher Mo Di and the Greeks Aristotle and Euclid described a pinhole camera in the 5th and 4th centuries BCE.[8][9] In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments,[10] Ibn al-Haytham (Alhazen) (965–1040) studied the camera obscura and pinhole camera,[9][11] Albertus Magnus (1193–1280) discovered silver nitrate,[12] and Georg Fabricius (1516–71) discovered silver chloride.[13] Techniques described in the Book of Optics are capable of producing primitive photographs using medieval materials.[14][15][16]

  

Daniele Barbaro described a diaphragm in 1566.[17] Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694.[18] The fiction book Giphantie, published in 1760, by French author Tiphaigne de la Roche, described what can be interpreted as photography.[17]

  

The discovery of the camera obscura that provides an image of a scene dates back to ancient China. Leonardo da Vinci mentions natural camerae obscurae formed by dark caves on the edge of a sunlit valley: a hole in the cave wall will act as a pinhole camera and project a laterally reversed, upside-down image onto a piece of paper. The birth of photography was thus primarily concerned with developing a means to fix and retain the image produced by the camera obscura.

  

The first success of reproducing images without a camera occurred when Thomas Wedgwood, from the famous family of potters, obtained copies of paintings on leather using silver salts. Since he had no way of permanently fixing those reproductions (stabilizing the image by washing out the non-exposed silver salts), they would turn completely black in the light and thus had to be kept in a dark room for viewing.

  

Renaissance painters used the camera obscura which, in fact, gives the optical rendering in color that dominates Western art. The camera obscura literally means "dark chamber" in Latin. It is a box with a hole in it that allows light to pass through and form an image on a surface such as a piece of paper.

  

First camera photography (1820s)

Invented in the early decades of the 19th century, photography by means of the camera seemed able to capture more detail and information than traditional media, such as painting and sculpture.[19] Photography as a usable process goes back to the 1820s with the development of chemical photography. The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it.[7] Niépce was successful again in 1825. He made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens), in 1826 or 1827.[20]

  

Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. Working in partnership with Louis Daguerre, he developed a somewhat more sensitive process that produced visually superior results, but it still required a few hours of exposure in the camera. Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process, the essential elements of which were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: unlike the other pedestrian and horse-drawn traffic on the busy boulevard, which appears deserted, one man having his boots polished stood sufficiently still throughout the approximately ten-minute-long exposure to be visible. Eventually, France agreed to pay Daguerre a pension for his process in exchange for the right to present his invention to the world as the gift of France, which occurred on 19 August 1839.

Meanwhile, in Brazil, Hercules Florence had already created his own process in 1832, naming it Photographie, and an English inventor, William Fox Talbot, had created another method of making a reasonably light-fast silver process image but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his method and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, with exposures comparable to the daguerreotype. In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies, the basis of most chemical photography up to the present day. Daguerreotypes could only be replicated by rephotographing them with a camera.[21] Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence.[22][23]

  

John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839.

  

In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets to the collodion process; the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper.

  

Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1884, George Eastman developed an early type of film to replace photographic plates, leading to the technology used by film cameras today.

  

In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize for Physics in 1908.

  

Black-and-white

See also: Monochrome photography

All photography was originally monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost and its "classic" photographic look. The tones and the contrast between light and dark areas define black-and-white photography.[24] Not all monochrome pictures are pure black and white; some contain other hues, depending on the process: the cyanotype process produces an image composed of blue tones, while the albumen process, first used more than 150 years ago, produces brown tones.

  

Many photographers continue to produce some monochrome images, often because of the established archival permanence of well processed silver halide based materials. Some full color digital images are processed using a variety of techniques to create black and whites, and some manufacturers produce digital cameras that exclusively shoot monochrome.

  

Color

Color photography was explored beginning in the mid-19th century. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.

  

The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by physicist James Clerk Maxwell in 1855. Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image.
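Maxwell's principle can be sketched in a few lines of code: three black-and-white exposures made through red, green and blue filters are used directly as the three channels of one color image. The tiny 2 × 2 "exposures" below are invented purely for illustration:

```python
# Three single-channel images (pixel values 0-255), as if taken
# through red, green and blue filters; the data is made up.
red_exposure   = [[255, 0], [0, 128]]
green_exposure = [[0, 255], [0, 128]]
blue_exposure  = [[0, 0], [255, 128]]

def combine(r, g, b):
    """Merge three single-channel images into one RGB image."""
    return [[(r[y][x], g[y][x], b[y][x])
             for x in range(len(r[0]))]
            for y in range(len(r))]

rgb = combine(red_exposure, green_exposure, blue_exposure)
# Equal values in all three channels reproduce a neutral gray:
print(rgb[1][1])  # (128, 128, 128)
```

Projecting the three filtered positives in register, as described next, performs the same channel merge optically rather than numerically.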

  

Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s.

  

Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images.

  

The development of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability.

  

Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s.

  

Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multilayer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure.

  

Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multilayer emulsion and the same principles, most closely resembling Agfa's product.

  

Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963.

  

Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment.

  

Digital photography

Main article: Digital photography

See also: Digital camera and Digital versus film photography

In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born.

  

Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.[25] An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital imaging is a highly manipulative medium. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications.

  

Photography gained the interest of many scientists and artists from its inception. Scientists have used photography to record and study movements, such as Eadweard Muybridge's study of human and animal locomotion in 1887. Artists are equally interested by these aspects but also try to explore avenues other than the photo-mechanical representation of reality, such as the pictorialist movement.

  

Military, police, and security forces use photography for surveillance, recognition and data storage. Photography is used by amateurs to preserve memories, to capture special moments, to tell stories, to send messages, and as a source of entertainment. High speed photography allows for visualizing events that are too fast for the human eye.

  

Technical aspects

Main article: Camera

The camera is the image-forming device, and photographic film or a silicon electronic image sensor is the sensing medium. The respective recording medium can be the film itself, or a digital electronic or magnetic memory.[26]

  

Photographers control the camera and lens to "expose" the light recording material (such as film) to the required amount of light to form a "latent image" (on film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper or film.

  

The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. The subject being photographed, however, must be illuminated. Cameras can range from small to very large, a whole room that is kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera).

  

As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens.

  

The movie camera is a type of photographic camera which takes a rapid sequence of photographs on strips of film. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures together to create the illusion of motion.[27]

  

Camera controls are interrelated. The total amount of light reaching the film plane (the 'exposure') changes with the duration of exposure, the aperture of the lens, and the effective focal length of the lens (which, in variable focal length lenses, can force a change in aperture as the lens is zoomed). Changing any of these controls alters the exposure. Many cameras can be set to adjust most or all of these controls automatically, which is useful for occasional photographers in many situations.
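The interrelation of these controls can be illustrated numerically. The sketch below is not part of the original text; it uses the simplifying assumption that relative exposure is proportional to shutter time divided by the square of the f-number, so changing either control changes the result:

```python
# Illustrative sketch, assuming relative exposure is proportional to
# (shutter time) / (f-number squared).
def relative_exposure(shutter_s: float, f_number: float) -> float:
    return shutter_s / f_number ** 2

base = relative_exposure(1 / 125, 8.0)
# Doubling the shutter time doubles the exposure; opening the
# aperture (a smaller f-number) likewise increases it.
doubled = relative_exposure(2 / 125, 8.0)
opened = relative_exposure(1 / 125, 5.6)
```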

  

The duration of an exposure is referred to as shutter speed, often even in cameras that do not have a physical shutter, and is typically measured in fractions of a second. Exposures of one to several seconds are quite possible, usually for still-life subjects, and for night scenes exposure times can extend to several hours. For a subject in motion, however, a fast shutter speed is needed to prevent the photograph from coming out blurred.[29]

  

The effective aperture is expressed by an f-number or f-stop (derived from focal ratio), which is proportional to the ratio of the focal length to the diameter of the aperture. Longer lenses will pass less light even though the diameter of the aperture is the same due to the greater distance the light has to travel; shorter lenses (a shorter focal length) will be brighter with the same size of aperture.

  

The smaller the f-number, the larger the effective aperture. The present system of f-numbers for expressing the effective aperture of a lens was standardized by an international convention; older cameras used various earlier series of numbers.

  

If the f-number is decreased by a factor of √2, the aperture diameter is increased by the same factor, and its area is increased by a factor of 2. The f-stops that might be found on a typical lens include 2.8, 4, 5.6, 8, 11, 16, 22, 32, where going up "one stop" (using lower f-stop numbers) doubles the amount of light reaching the film, and stopping down one stop halves the amount of light.
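This √2 progression can be generated directly. The sketch below (an illustration added here, not part of the original text) computes powers of √2; the values marked on real lenses are conventional roundings of this sequence:

```python
import math

# The marked f-stop series on a lens is a conventionally rounded
# sequence of powers of sqrt(2): each step halves or doubles the
# aperture area, and hence the amount of light admitted.
def f_stop_series(start: float = 2.8, steps: int = 8) -> list[float]:
    return [round(start * math.sqrt(2) ** i, 1) for i in range(steps)]

# Close to the marked series 2.8, 4, 5.6, 8, 11, 16, 22, 32
# (differing only by the conventional rounding of the markings).
series = f_stop_series()
```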

  

Image capture can be achieved through various combinations of shutter speed, aperture, and film or sensor speed. Different (but related) settings of aperture and shutter speed enable photographs to be taken under various conditions of film or sensor speed, lighting and motion of subjects and/or camera, and desired depth of field. A slower speed film will exhibit less "grain", and a slower speed setting on an electronic sensor will exhibit less "noise", while higher film and sensor speeds allow for a faster shutter speed, which reduces motion blur or allows the use of a smaller aperture to increase the depth of field.
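The trade-off between sensor speed and shutter speed described above can be sketched as follows. This is an illustrative simplification (not from the original text), assuming image brightness scales linearly with ISO speed and shutter time and inversely with the square of the f-number:

```python
# Simplifying assumption: brightness scales linearly with ISO speed
# and shutter time, and inversely with the square of the f-number.
def brightness(iso: float, shutter_s: float, f_number: float) -> float:
    return iso * shutter_s / f_number ** 2

# Doubling the sensor speed (at the cost of more noise) permits
# halving the shutter time, reducing motion blur, at equal brightness.
slow_film = brightness(100, 1 / 60, 8.0)
fast_film = brightness(200, 1 / 120, 8.0)
```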

  

For example, a wider aperture is used in low light and a smaller aperture in bright light. If a subject is in motion, a fast shutter speed may be needed. A tripod can also be helpful, in that it enables a slower shutter speed to be used.

  

For example, f/8 at 8 ms (1/125 of a second) and f/5.6 at 4 ms (1/250 of a second) yield the same amount of light. The chosen combination has an impact on the final result. The aperture and focal length of the lens determine the depth of field, which refers to the range of distances from the lens that will be in focus. A longer lens or a wider aperture will result in "shallow" depth of field (i.e. only a small plane of the image will be in sharp focus). This is often useful for isolating subjects from backgrounds as in individual portraits or macro photography.
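The equivalence of these two settings can be checked with the exposure value (EV), a standard way of combining aperture and shutter time: EV = log₂(N²/t). The sketch below is an added illustration; the small residual difference reflects the conventional rounding of the marked f-number 5.6 (exactly 4·√2 ≈ 5.657):

```python
import math

# Exposure value combines f-number N and shutter time t:
# EV = log2(N^2 / t). Settings with the same EV admit the
# same amount of light.
def exposure_value(f_number: float, shutter_s: float) -> float:
    return math.log2(f_number ** 2 / shutter_s)

ev_a = exposure_value(8.0, 1 / 125)  # f/8 at 8 ms
ev_b = exposure_value(5.6, 1 / 250)  # f/5.6 at 4 ms
# ev_a and ev_b agree to within the rounding of the marked f-numbers.
```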

  

Conversely, a shorter lens, or a smaller aperture, will result in more of the image being in focus. This is generally more desirable when photographing landscapes or groups of people. With very small apertures, such as pinholes, a wide range of distance can be brought into focus, but sharpness is severely degraded by diffraction with such small apertures. Generally, the highest degree of "sharpness" is achieved at an aperture near the middle of a lens's range (for example, f/8 for a lens with available apertures of f/2.8 to f/16). However, as lens technology improves, lenses are becoming capable of making increasingly sharp images at wider apertures.
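One common way to quantify this trade-off is the hyperfocal distance, H = f²/(N·c) + f, where c is the circle of confusion: focusing at H renders everything from H/2 to infinity acceptably sharp. The sketch below uses this standard formula; the value c = 0.03 mm is a conventional assumption for the 35 mm full-frame format:

```python
# Hyperfocal distance H = f^2 / (N * c) + f, all lengths in mm.
# c (circle of confusion) is assumed to be 0.03 mm, a conventional
# value for the 35 mm full-frame format.
def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float = 0.03) -> float:
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Stopping a 50 mm lens down from f/2.8 to f/16 pulls the hyperfocal
# distance from roughly 29.8 m in to roughly 5.3 m, putting far more
# of the scene in acceptable focus.
wide_open = hyperfocal_mm(50, 2.8)
stopped_down = hyperfocal_mm(50, 16)
```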

  

Image capture is only part of the image forming process. Regardless of material, some process must be employed to render the latent image captured by the camera into a viewable image. With slide film, the developed film is simply mounted for projection. Print film requires the developed film negative to be printed onto photographic paper or transparency. Digital images may be uploaded to an image server (e.g., a photo-sharing web site), viewed on a television, or transferred to a computer or digital photo frame. Any of these can also be printed on more "classical" media such as regular paper or photographic paper.

  

Prior to the rendering of a viewable image, modifications can be made using several controls. Many of these controls are similar to controls during image capture, while some are exclusive to the rendering process. Most printing controls have equivalent digital concepts, but some create different effects. For example, dodging and burning controls are different between digital and film processes. Other printing modifications include:

Digital point-and-shoot cameras have become widespread consumer products, outselling film cameras and including new features such as video and audio recording. Kodak announced in January 2004 that it would no longer sell reloadable 35 mm cameras in western Europe, Canada and the United States after the end of that year; Kodak was at that time a minor player in the reloadable film camera market. In January 2006, Nikon followed suit and announced that it would stop production of all but two of its film camera models: the low-end Nikon FM10 and the high-end Nikon F6. On 25 May 2006, Canon announced it would stop developing new film SLR cameras.[34] Though most new camera designs are now digital, a new 6×6 cm/6×7 cm medium format film camera was introduced in 2008 in a cooperation between Fuji and Voigtländer.[35][36]

  

According to a survey made by Kodak in 2007, when the majority of photography was already digital, 75 percent of professional photographers said they would continue to use film, even though some embraced digital.[37]

  

According to the PMA, nearly a billion rolls of film were sold each year around 2000; by 2011 this had fallen to a mere 20 million rolls, plus 31 million single-use cameras.[38]

  

Source:

en.wikipedia.org/wiki/Photography

de.wikipedia.org/wiki/Fotografie