

From Wikipedia, the free encyclopedia

In compiler design, static single assignment form (often abbreviated as SSA form or simply SSA) is a type of intermediate representation (IR) where each variable is assigned exactly once. SSA is used in most high-quality optimizing compilers for imperative languages, including LLVM, the GNU Compiler Collection, and many commercial compilers.

There are efficient algorithms for converting programs into SSA form. To convert to SSA, existing variables in the original IR are split into versions, new variables typically indicated by the original name with a subscript, so that every definition gets its own version. Additional statements that assign to new versions of variables may also need to be introduced at the join point of two control flow paths. Converting from SSA form to machine code is also efficient.

SSA makes numerous analyses needed for optimizations easier to perform, such as determining use-define chains, because when looking at a use of a variable there is only one place where that variable may have received a value. Most optimizations can be adapted to preserve SSA form, so that one optimization can be performed after another with no additional analysis. SSA-based optimizations are usually more efficient and more powerful than their non-SSA predecessors.

In functional language compilers, such as those for Scheme and ML, continuation-passing style (CPS) is generally used. SSA is formally equivalent to a well-behaved subset of CPS excluding non-local control flow, so optimizations and transformations formulated in terms of one generally apply to the other. Using CPS as the intermediate representation is more natural for higher-order functions and interprocedural analysis. CPS also easily encodes call/cc, whereas SSA does not.[1]

History

SSA was developed in the 1980s by several researchers at IBM. Kenneth Zadeck, a key member of the team, moved to Brown University as development continued.[2][3] A 1986 paper introduced birthpoints, identity assignments, and variable renaming such that variables had a single static assignment.[4] A subsequent 1987 paper by Jeanne Ferrante and Ronald Cytron[5] proved that the renaming done in the previous paper removes all false dependencies for scalars.[3] In 1988, Barry Rosen, Mark N. Wegman, and Kenneth Zadeck replaced the identity assignments with Φ-functions, introduced the name "static single-assignment form", and demonstrated a now-common SSA optimization.[6] The name Φ-function was chosen by Rosen to be a more publishable version of "phony function".[3] Alpern, Wegman, and Zadeck presented another optimization, but using the name "static single assignment".[7] Finally, in 1989, Rosen, Wegman, Zadeck, Cytron, and Ferrante found an efficient means of converting programs to SSA form.[8]

Benefits

The primary usefulness of SSA comes from how it simultaneously simplifies and improves the results of a variety of compiler optimizations, by simplifying the properties of variables. For example, consider this piece of code:

y := 1
y := 2
x := y

Humans can see that the first assignment is not necessary, and that the value of y being used in the third line comes from the second assignment of y. A program would have to perform reaching definition analysis to determine this. But if the program is in SSA form, both of these are immediate:

y1 := 1
y2 := 2
x1 := y2
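In SSA form the check reduces to counting uses: a definition is dead exactly when its unique name is never used and is not live out of the fragment. A minimal Python sketch illustrating this on the example above (the tuple instruction format and the `live_out` parameter are hypothetical, for illustration only):

```python
def dead_definitions(instrs, live_out):
    """Definitions whose SSA name is never used by another instruction
    and is not live on exit. In SSA each name has exactly one
    definition, so no reaching-definitions analysis is needed."""
    used = set()
    for _dest, _op, *args in instrs:
        used.update(a for a in args if isinstance(a, str))
    return [dest for dest, *_ in instrs
            if dest not in used and dest not in live_out]

code = [
    ("y1", "const", 1),    # y1 := 1
    ("y2", "const", 2),    # y2 := 2
    ("x1", "copy", "y2"),  # x1 := y2
]
print(dead_definitions(code, live_out={"x1"}))  # ['y1']
```

The first assignment is identified as dead by a simple set-membership test, with no dataflow analysis.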

Compiler optimization algorithms that are either enabled or strongly enhanced by the use of SSA include:

  • Constant folding – conversion of computations from runtime to compile time, e.g. treat the instruction a=3*4+5; as if it were a=17;
  • Value range propagation[9] – precompute the potential ranges a calculation's result could take, allowing branch predictions to be created in advance
  • Sparse conditional constant propagation – range-check some values, allowing tests to predict the most likely branch
  • Dead-code elimination – remove code that will have no effect on the results
  • Global value numbering – replace duplicate calculations producing the same result
  • Partial-redundancy elimination – removing duplicate calculations previously performed in some branches of the program
  • Strength reduction – replacing expensive operations by less expensive but equivalent ones, e.g. replace integer multiply or divide by powers of 2 with the potentially less expensive shift left (for multiply) or shift right (for divide).
  • Register allocation – optimize how the limited number of machine registers may be used for calculations
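As an illustration of the first item in the list, constant folding over a straight-line SSA fragment can be written as a single forward pass, because each name is defined exactly once and its value can be recorded the moment it becomes known. A minimal sketch (the three-address tuple format and opcode names are hypothetical):

```python
import operator

# Hypothetical three-address SSA instructions: (dest, op, arg1, arg2),
# where arguments are integer literals or SSA names.
OPS = {"add": operator.add, "mul": operator.mul}

def fold_constants(instrs):
    """Replace instructions whose operands are all known constants with
    a direct constant definition, propagating results forward. SSA makes
    the single pass safe: a name's value cannot change after its one
    definition."""
    known = {}  # SSA name -> constant value
    out = []
    for dest, op, a, b in instrs:
        a = known.get(a, a)
        b = known.get(b, b)
        if op in OPS and isinstance(a, int) and isinstance(b, int):
            known[dest] = OPS[op](a, b)
            out.append((dest, "const", known[dest], None))
        else:
            out.append((dest, op, a, b))
    return out

folded = fold_constants([
    ("t1", "mul", 3, 4),     # t1 := 3 * 4
    ("a1", "add", "t1", 5),  # a1 := t1 + 5
])
print(folded[-1])  # ('a1', 'const', 17, None)
```

This reproduces the a=3*4+5 → a=17 example: both instructions fold to constants in one pass.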

Converting to SSA

Converting ordinary code into SSA form is primarily a matter of replacing the target of each assignment with a new variable, and replacing each use of a variable with the "version" of the variable reaching that point. For example, consider the following control-flow graph:

An example control-flow graph, before conversion to SSA

Changing the name on the left hand side of "x ← x - 3" and changing the following uses of x to that new name would leave the program unaltered. This can be exploited in SSA by creating two new variables: x1 and x2, each of which is assigned only once. Likewise, giving distinguishing subscripts to all the other variables yields:

An example control-flow graph, partially converted to SSA

It is clear which definition each use is referring to, except for one case: both uses of y in the bottom block could be referring to either y1 or y2, depending on which path the control flow took.

To resolve this, a special statement is inserted in the last block, called a Φ (Phi) function. This statement will generate a new definition of y called y3 by "choosing" either y1 or y2, depending on the control flow in the past.

An example control-flow graph, fully converted to SSA

Now, the last block can simply use y3, and the correct value will be obtained either way. A Φ function for x is not needed: only one version of x, namely x2, reaches this place, so there is no problem (in other words, Φ(x2,x2) = x2).

Given an arbitrary control-flow graph, it can be difficult to tell where to insert Φ functions, and for which variables. This general question has an efficient solution that can be computed using a concept called dominance frontiers (see below).

Φ functions are not implemented as machine operations on most machines. A compiler can implement a Φ function by inserting "move" operations at the end of every predecessor block. In the example above, the compiler might insert a move from y1 to y3 at the end of the middle-left block and a move from y2 to y3 at the end of the middle-right block. These move operations might not end up in the final code based on the compiler's register allocation procedure. However, this approach may not work when simultaneous operations are speculatively producing inputs to a Φ function, as can happen on wide-issue machines. Typically, a wide-issue machine has a selection instruction used in such situations by the compiler to implement the Φ function.
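The move-insertion strategy just described can be sketched as follows; the block names and instruction tuples are hypothetical, and a real compiler must additionally sequence the moves carefully when several Φ functions at one join read each other's outputs:

```python
def lower_phis(blocks, phis):
    """blocks maps a block name to its instruction list; phis is a list
    of (join_block, dest, {predecessor: source_name}). Each Φ becomes
    one "move" appended at the end of each predecessor block."""
    for _join, dest, sources in phis:
        for pred, src in sources.items():
            blocks[pred].append(("move", dest, src))
    return blocks

# The y3 := Φ(y1, y2) example from the figures above.
blocks = {"then": [], "else": [], "join": []}
lower_phis(blocks, [("join", "y3", {"then": "y1", "else": "y2"})])
print(blocks["then"])  # [('move', 'y3', 'y1')]
```

After lowering, the join block refers only to y3; the copy coalescing done during register allocation typically removes most of these moves.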

Computing minimal SSA using dominance frontiers

In a control-flow graph, a node A is said to strictly dominate a different node B if it is impossible to reach B without passing through A first. In other words, if node B is reached, then it can be assumed that A has run. A is said to dominate B (or B to be dominated by A) if either A strictly dominates B or A = B.

A node which transfers control to a node A is called an immediate predecessor of A.

The dominance frontier of node A is the set of nodes B where A does not strictly dominate B, but does dominate some immediate predecessor of B. These are the points at which multiple control paths merge back together into a single path.

For example, in the following code:

[1] x = random()
if x < 0.5
    [2] result = "heads"
else
    [3] result = "tails"
end
[4] print(result)

Node 1 strictly dominates 2, 3, and 4, and the immediate predecessors of node 4 are nodes 2 and 3.

Dominance frontiers define the points at which Φ functions are needed. In the above example, when control is passed to node 4, the definition of result used depends on whether control was passed from node 2 or 3. Φ functions are not needed for variables defined in a dominator, as there is only one possible definition that can apply.
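This placement rule is commonly implemented as a worklist over dominance frontiers, in the style of Cytron et al.: a Φ for a variable is placed at the dominance frontier of every block that defines it, and since an inserted Φ is itself a new definition, the process iterates to a fixed point. A sketch, assuming the dominance frontiers are already computed (data-structure choices are illustrative):

```python
def place_phis(defsites, df):
    """defsites: variable -> set of blocks that assign it.
    df: block -> its dominance frontier (a set of blocks).
    Returns variable -> set of blocks needing a Φ for it."""
    placement = {}
    for var, sites in defsites.items():
        has_phi = set()
        work = list(sites)
        while work:
            block = work.pop()
            for frontier in df.get(block, ()):
                if frontier not in has_phi:
                    has_phi.add(frontier)
                    if frontier not in sites:
                        work.append(frontier)  # the Φ is a new definition
        placement[var] = has_phi
    return placement

# The heads/tails example: node 1 branches to 2 and 3, which join at 4.
df = {1: set(), 2: {4}, 3: {4}, 4: set()}
print(place_phis({"result": {2, 3}}, df))  # {'result': {4}}
```

For the example above, the single Φ for result lands at node 4, exactly the join point where the two control paths merge.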

There is an efficient algorithm for finding dominance frontiers of each node. This algorithm was originally described in "Efficiently Computing Static Single Assignment Form and the Control Dependence Graph" by Ron Cytron, Jeanne Ferrante, et al. in 1991.[10]

Keith D. Cooper, Timothy J. Harvey, and Ken Kennedy of Rice University describe an algorithm in their paper titled A Simple, Fast Dominance Algorithm:[11]

for each node b
    dominance_frontier(b) := {}
for each node b
    if the number of immediate predecessors of b ≥ 2
        for each p in immediate predecessors of b
            runner := p
            while runner ≠ idom(b)
                dominance_frontier(runner) := dominance_frontier(runner) ∪ { b }
                runner := idom(runner)

In the code above, idom(b) is the immediate dominator of b, the unique node that strictly dominates b but does not strictly dominate any other node that strictly dominates b.
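A direct Python transcription of this pseudocode, assuming the predecessor lists and immediate dominators have already been computed (for example by the dominator-tree algorithm from the same paper):

```python
def dominance_frontiers(preds, idom):
    """preds maps each node to its list of immediate predecessors;
    idom maps each node to its immediate dominator (the entry node
    maps to itself). Returns node -> dominance frontier."""
    df = {b: set() for b in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:  # only join points contribute
            for p in ps:
                runner = p
                # Walk up the dominator tree until reaching idom(b),
                # adding b to the frontier of every node passed.
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# The heads/tails example: node 1 branches to 2 and 3, joining at 4.
preds = {1: [], 2: [1], 3: [1], 4: [2, 3]}
idom = {1: 1, 2: 1, 3: 1, 4: 1}
print(dominance_frontiers(preds, idom))
# {1: set(), 2: {4}, 3: {4}, 4: set()}
```

Nodes 2 and 3 each have node 4 in their dominance frontier: they dominate a predecessor of 4 but do not strictly dominate 4 itself.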

Variations that reduce the number of Φ functions

"Minimal" SSA inserts the minimal number of Φ functions required to ensure that each name is assigned a value exactly once and that each reference (use) of a name in the original program can still refer to a unique name. (The latter requirement is needed to ensure that the compiler can write down a name for each operand in each operation.)

However, some of these Φ functions could be dead. For this reason, minimal SSA does not necessarily produce the fewest Φ functions that are needed by a specific procedure. For some types of analysis, these Φ functions are superfluous and can cause the analysis to run less efficiently.

Pruned SSA

Pruned SSA form is based on a simple observation: Φ functions are only needed for variables that are "live" after the Φ function. (Here, "live" means that the value is used along some path that begins at the Φ function in question.) If a variable is not live, the result of the Φ function cannot be used and the assignment by the Φ function is dead.

Construction of pruned SSA form uses live-variable information in the Φ function insertion phase to decide whether a given Φ function is needed. If the original variable name isn't live at the Φ function insertion point, the Φ function isn't inserted.

Another possibility is to treat pruning as a dead-code elimination problem. Then, a Φ function is live only if any use in the input program will be rewritten to it, or if it will be used as an argument in another Φ function. When entering SSA form, each use is rewritten to the nearest definition that dominates it. A Φ function will then be considered live as long as it is the nearest definition that dominates at least one use, or at least one argument of a live Φ.

Semi-pruned SSA

Semi-pruned SSA form[12] is an attempt to reduce the number of Φ functions without incurring the relatively high cost of computing live-variable information. It is based on the following observation: if a variable is never live upon entry into a basic block, it never needs a Φ function. During SSA construction, Φ functions for any "block-local" variables are omitted.

Computing the set of block-local variables is a simpler and faster procedure than full live-variable analysis, making semi-pruned SSA form more efficient to compute than pruned SSA form. On the other hand, semi-pruned SSA form will contain more Φ functions.
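The block-local computation can be sketched as a single pass per block, in the spirit of the "globals" set of Briggs et al.: any name used in a block before being defined in that same block must be live across a block boundary, and only such names receive Φ functions. The instruction representation below is hypothetical:

```python
def non_local_names(blocks):
    """blocks: list of instruction lists; each instruction is
    (dest, uses). Returns the names that are used in some block before
    any local definition, i.e. the names that are live across a block
    boundary and may need Φ functions. Purely block-local names never
    do."""
    non_local = set()
    for instrs in blocks:
        defined = set()
        for dest, uses in instrs:
            non_local.update(u for u in uses if u not in defined)
            defined.add(dest)
    return non_local

blocks = [
    [("t", ["x"]), ("x", ["t"])],  # t is defined before its use: local
    [("y", ["x"])],                # x flows in from another block
]
print(non_local_names(blocks))  # {'x'}
```

Here t never needs a Φ function regardless of the control-flow graph, while x might, since it crosses block boundaries. Note the pass needs no liveness analysis, only one scan of each block.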

Block arguments

Block arguments are an alternative to Φ functions that is representationally identical but in practice can be more convenient during optimization. Blocks are named and take a list of block arguments, notated as function parameters. When calling a block the block arguments are bound to specified values. MLton, Swift SIL, and LLVM MLIR use block arguments.[13]

Converting out of SSA form

SSA form is not normally used for direct execution (although it is possible to interpret SSA[14]), and it is frequently used "on top of" another IR with which it remains in direct correspondence. This can be accomplished by "constructing" SSA as a set of functions that map between parts of the existing IR (basic blocks, instructions, operands, etc.) and its SSA counterpart. When the SSA form is no longer needed, these mapping functions may be discarded, leaving only the now-optimized IR.

Performing optimizations on SSA form usually leads to entangled SSA webs, meaning there are Φ instructions whose operands do not all have the same root variable. In such cases, "color-out" algorithms are used to leave SSA form. Naive algorithms introduce a copy along each predecessor path where a Φ source has a different root variable than the Φ destination. There are multiple algorithms for leaving SSA with fewer copies; most use interference graphs or some approximation of them to perform copy coalescing.[15]
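The grouping of Φ destinations and operands into webs can be sketched with a union-find structure. This is only the first step of a color-out pass: a real implementation must also check interference before merging names, which the sketch below deliberately omits:

```python
def ssa_webs(phis):
    """Group each Φ destination with its operands into webs using a
    union-find; a naive out-of-SSA pass then assigns one non-SSA
    variable per web. Interference checking, needed for correctness
    after aggressive optimization, is omitted here."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for dest, operands in phis:
        for op in operands:
            parent[find(op)] = find(dest)  # union op's web with dest's

    webs = {}
    for name in list(parent):
        webs.setdefault(find(name), set()).add(name)
    return list(webs.values())

webs = ssa_webs([("y3", ["y1", "y2"])])
print(webs)  # one web: y1, y2 and y3 can share a variable
```

For the y3 := Φ(y1, y2) example, all three names fall into a single web and can be renamed back to a single variable y, eliminating every copy.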

Extensions

Extensions to SSA form can be divided into two categories.

Renaming scheme extensions alter the renaming criterion. Recall that SSA form renames each variable when it is assigned a value. Alternative schemes include static single use form (which renames each variable at each statement when it is used) and static single information form (which renames each variable when it is assigned a value, and at the post-dominance frontier).

Feature-specific extensions retain the single assignment property for variables, but incorporate new semantics to model additional features. Some feature-specific extensions model high-level programming language features like arrays, objects and aliased pointers. Other feature-specific extensions model low-level architectural features like speculation and predication.

Compilers using SSA form

Open-source

  • Mono uses SSA in its JIT compiler called Mini
  • WebKit uses SSA in its JIT compilers.[16][17]
  • Swift defines its own SSA form above LLVM IR, called SIL (Swift Intermediate Language).[18][19]
  • The Erlang compiler was rewritten in OTP 22.0 to "internally use an intermediate representation based on Static Single Assignment (SSA)", with plans for further optimizations built on top of SSA in future releases.[20]
  • The LLVM Compiler Infrastructure uses SSA form for all scalar register values (everything except memory) in its primary code representation. SSA form is only eliminated once register allocation occurs, late in the compile process (often at link time).
  • The GNU Compiler Collection (GCC) makes extensive use of SSA since version 4 (released in April 2005). The frontends generate "GENERIC" code that is then converted into "GIMPLE" code by the "gimplifier". High-level optimizations are then applied on the SSA form of "GIMPLE". The resulting optimized intermediate code is then translated into RTL, on which low-level optimizations are applied. The architecture-specific backends finally turn RTL into assembly language.
  • Go (1.7: for x86-64 architecture only; 1.8: for all supported architectures).[21][22]
  • IBM's open source adaptive Java virtual machine, Jikes RVM, uses extended Array SSA, an extension of SSA that allows analysis of scalars, arrays, and object fields in a unified framework. Extended Array SSA analysis is only enabled at the maximum optimization level, which is applied to the most frequently executed portions of code.
  • The Mozilla Firefox SpiderMonkey JavaScript engine uses SSA-based IR.[23]
  • The Chromium V8 JavaScript engine implements SSA in its Crankshaft compiler infrastructure, as announced in December 2010.
  • PyPy uses a linear SSA representation for traces in its JIT compiler.
  • The Android Runtime[24] and the Dalvik Virtual Machine use SSA.[25]
  • The Standard ML compiler MLton uses SSA in one of its intermediate languages.
  • LuaJIT makes heavy use of SSA-based optimizations.[26]
  • The PHP and Hack compiler HHVM uses SSA in its IR.[27]
  • The OCaml compiler uses SSA in its CMM IR (which stands for C--).[28]
  • libFirm, a library for use as the middle and back ends of a compiler, uses SSA form for all scalar register values until code generation by use of an SSA-aware register allocator.[29]
  • Various Mesa drivers via NIR, an SSA representation for shading languages.[30]

Commercial

Research and abandoned

  • The ETH Oberon-2 compiler was one of the first public projects to incorporate "GSA", a variant of SSA.
  • The Open64 compiler used SSA form in its global scalar optimizer, though the code is brought into SSA form before and taken out of SSA form afterwards. Open64 uses extensions to SSA form to represent memory in SSA form as well as scalar values.
  • In 2002, researchers modified IBM's JikesRVM (named Jalapeño at the time) to run both standard Java bytecode and typesafe SSA (SafeTSA) bytecode class files, and demonstrated significant performance benefits from using the SSA bytecode.
  • jackcc is an open-source compiler for the academic instruction set Jackal 3.0. It uses a simple 3-operand code with SSA for its intermediate representation. As an interesting variant, it replaces Φ functions with a so-called SAME instruction, which instructs the register allocator to place the two live ranges into the same physical register.
  • The Illinois Concert Compiler circa 1994[36] used a variant of SSA called SSU (Static Single Use) which renames each variable when it is assigned a value, and in each conditional context in which that variable is used; essentially the static single information form mentioned above. The SSU form is documented in John Plevyak's Ph.D. thesis.
  • The COINS compiler uses SSA form optimizations.
  • Reservoir Labs' R-Stream compiler supports non-SSA (quad list), SSA and SSI (Static Single Information[37]) forms.[38]
  • Although not a compiler, the Boomerang decompiler uses SSA form in its internal representation. SSA is used to simplify expression propagation, identifying parameters and returns, preservation analysis, and more.
  • DotGNU Portable.NET used SSA in its JIT compiler.

References

Notes

  1. ^ Kelsey, Richard A. (1995). "A correspondence between continuation passing style and static single assignment form" (PDF). Papers from the 1995 ACM SIGPLAN workshop on Intermediate representations. pp. 13–22. doi:10.1145/202529.202532. ISBN 0897917545. S2CID 6207179.
  2. ^ Rastello & Tichadou 2022, sec. 1.4.
  3. ^ a b c Zadeck, Kenneth (April 2009). The Development of Static Single Assignment Form (PDF). Static Single-Assignment Form Seminar. Autrans, France.
  4. ^ Cytron, Ron; Lowry, Andy; Zadeck, F. Kenneth (1986). "Code motion of control structures in high-level languages". Proceedings of the 13th ACM SIGACT-SIGPLAN symposium on Principles of programming languages - POPL '86. pp. 70–85. doi:10.1145/512644.512651. S2CID 9099471.
  5. ^ Cytron, Ronald Kaplan; Ferrante, Jeanne. What's in a name? Or, the value of renaming for parallelism detection and storage allocation. International Conference on Parallel Processing, ICPP'87 1987. pp. 19–27.
  6. ^ Barry Rosen; Mark N. Wegman; F. Kenneth Zadeck (1988). "Global value numbers and redundant computations" (PDF). Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '88. pp. 12–27. doi:10.1145/73560.73562. ISBN 0-89791-252-7.
  7. ^ Alpern, B.; Wegman, M. N.; Zadeck, F. K. (1988). "Detecting equality of variables in programs". Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '88. pp. 1–11. doi:10.1145/73560.73561. ISBN 0897912527. S2CID 18384941.
  8. ^ Cytron, Ron; Ferrante, Jeanne; Rosen, Barry K.; Wegman, Mark N. & Zadeck, F. Kenneth (1991). "Efficiently computing static single assignment form and the control dependence graph" (PDF). ACM Transactions on Programming Languages and Systems. 13 (4): 451–490. CiteSeerX 10.1.1.100.6361. doi:10.1145/115372.115320. S2CID 13243943.
  9. ^ value range propagation
  10. ^ Cytron, Ron; Ferrante, Jeanne; Rosen, Barry K.; Wegman, Mark N.; Zadeck, F. Kenneth (1 October 1991). "Efficiently computing static single assignment form and the control dependence graph". ACM Transactions on Programming Languages and Systems. 13 (4): 451–490. doi:10.1145/115372.115320. S2CID 13243943.
  11. ^ Cooper, Keith D.; Harvey, Timothy J.; Kennedy, Ken (2001). A Simple, Fast Dominance Algorithm (PDF) (Technical report). Rice University, CS Technical Report 06-33870. Archived from the original (PDF) on 2025-08-06.
  12. ^ Briggs, Preston; Cooper, Keith D.; Harvey, Timothy J.; Simpson, L. Taylor (1998). Practical Improvements to the Construction and Destruction of Static Single Assignment Form (PDF) (Technical report). Archived from the original (PDF) on 2025-08-06.
  13. ^ "Block Arguments vs PHI nodes - MLIR Rationale". mlir.llvm.org. Retrieved 4 March 2022.
  14. ^ von Ronne, Jeffery; Ning Wang; Michael Franz (2004). "Interpreting programs in static single assignment form". Proceedings of the 2004 workshop on Interpreters, virtual machines and emulators - IVME '04. p. 23. doi:10.1145/1059579.1059585. ISBN 1581139098. S2CID 451410.
  15. ^ Boissinot, Benoît; Darte, Alain; Rastello, Fabrice; Dinechin, Benoît Dupont de; Guillon, Christophe (2008). "Revisiting Out-of-SSA Translation for Correctness, Code Quality, and Efficiency". HAL-Inria Cs.DS: 14.
  16. ^ "Introducing the WebKit FTL JIT". 13 May 2014.
  17. ^ "Introducing the B3 JIT Compiler". 15 February 2016.
  18. ^ "Swift Intermediate Language (GitHub)". GitHub. 30 October 2021.
  19. ^ "Swift's High-Level IR: A Case Study of Complementing LLVM IR with Language-Specific Optimization, LLVM Developers Meetup 10/2015". YouTube. 9 November 2015. Archived from the original on 2025-08-06.
  20. ^ "OTP 22.0 Release Notes".
  21. ^ "Go 1.7 Release Notes - The Go Programming Language". golang.org. Retrieved 2025-08-06.
  22. ^ "Go 1.8 Release Notes - The Go Programming Language". golang.org. Retrieved 2025-08-06.
  23. ^ "IonMonkey Overview".
  24. ^ The Evolution of ART - Google I/O 2016. Google. 25 May 2016. Event occurs at 3m47s.
  25. ^ Ramanan, Neeraja (12 Dec 2011). "JIT through the ages" (PDF).
  26. ^ "Bytecode Optimizations". the LuaJIT project.
  27. ^ "HipHop Intermediate Representation (HHIR)". GitHub. 30 October 2021.
  28. ^ Chambart, Pierre; Laviron, Vincent; Pinto, Dario (2025-08-06). "Behind the Scenes of the OCaml Optimising Compiler". OCaml Pro.
  29. ^ "Firm - Optimization and Machine Code Generation".
  30. ^ Ekstrand, Jason (16 December 2014). "Reintroducing NIR, a new IR for mesa".
  31. ^ "The Java HotSpot Performance Engine Architecture". Oracle Corporation.
  32. ^ "Introducing a new, advanced Visual C++ code optimizer". 4 May 2016.
  33. ^ "SPIR-V spec" (PDF).
  34. ^ Sarkar, V. (May 1997). "Automatic selection of high-order transformations in the IBM XL FORTRAN compilers" (PDF). IBM Journal of Research and Development. 41 (3). IBM: 233–264. doi:10.1147/rd.413.0233.
  35. ^ Chakrabarti, Gautam; Grover, Vinod; Aarts, Bastiaan; Kong, Xiangyun; Kudlur, Manjunath; Lin, Yuan; Marathe, Jaydeep; Murphy, Mike; Wang, Jian-Zhong (2012). "CUDA: Compiling and optimizing for a GPU platform". Procedia Computer Science. 9: 1910–1919. doi:10.1016/j.procs.2012.04.209.
  36. ^ "Illinois Concert Project". Archived from the original on 2025-08-06.
  37. ^ Ananian, C. Scott; Rinard, Martin (1999). Static Single Information Form (PDF) (Technical report). CiteSeerX 10.1.1.1.9976.
  38. ^ Encyclopedia of Parallel Computing.

General references
