Talk:Phasor


Phasor addition[edit]

The amplitude of the phasor addition result should be stated unambiguously, that is, as the positive square root of the right-hand side. And the phase shift of the phasor addition result should not use arctan but atan2, to get the quadrant right:
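Presumably, for the sum $A_1\cos(\omega t + \theta_1) + A_2\cos(\omega t + \theta_2) = A_3\cos(\omega t + \theta_3)$, the intended formulas are (my reconstruction):

    $A_3 = +\sqrt{(A_1\cos\theta_1 + A_2\cos\theta_2)^2 + (A_1\sin\theta_1 + A_2\sin\theta_2)^2}$

    $\theta_3 = \operatorname{atan2}\big(A_1\sin\theta_1 + A_2\sin\theta_2,\; A_1\cos\theta_1 + A_2\cos\theta_2\big)$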


Untitled[edit]

Phasors are used for a lot more than just AC circuit analysis. I think the exact definition may differ depending on the field you are working in, such as signal processing, where phasors most definitely do have a frequency. This is probably the best general definition of this mathematical tool: [1]

I wrote this page to fill a link before I had a particularly solid understanding of phasors myself. At some point I need to rewrite this article, but that's quite a bit of work. Plugwash 13:52, 17 Apr 2005 (UTC)

Phasor multiplication[edit]

Is this ever used in regular contexts besides power calculations? I imagine it is. - Omegatron 21:23, August 15, 2005 (UTC)

Yeah, now that I think of it, I'm pretty sure I remember rotating phasor diagrams in communications classes for modulation stuff. - Omegatron 21:27, August 15, 2005 (UTC)


Here is the rule:

    $A_1\angle\theta_1 \cdot A_2\angle\theta_2 = A_1 A_2\,\angle(\theta_1 + \theta_2)$

Doesn't that imply:

    $A_1\cos(\omega t + \theta_1)\cdot A_2\cos(\omega t + \theta_2) = A_1 A_2\cos(\omega t + \theta_1 + \theta_2)$

which is not true. What am I missing?

--Bob K 09:38, 21 August 2007 (UTC)[reply]

Bob, consider that a phasor is just the amplitude of the frequency domain representation of the sinusoid. Multiplying two phasors is essentially multiplication in the frequency domain, which results in convolution, not multiplication, in the time domain. That's why we say that 'the product (or ratio) of two phasors is not itself a phasor'. Alfred Centauri 13:01, 21 August 2007 (UTC)[reply]


Thanks, but the same source provides this definition:

    $A\angle\theta = A\cos(\omega t + \theta)$

which simply means to me that $A\angle\theta$ is a shorthand notation. The "equals" sign should mean we can directly substitute the full mathematical notation for the shorthand:

    $A\angle\theta$    is just a shorthand for    $A\cos(\omega t + \theta)$

Similarly,

    $A_1\angle\theta_1 \cdot A_2\angle\theta_2 = A_1 A_2\,\angle(\theta_1 + \theta_2)$    means    $A_1\cos(\omega t + \theta_1)\cdot A_2\cos(\omega t + \theta_2) = A_1 A_2\cos(\omega t + \theta_1 + \theta_2)$

I'm just doing the obvious application of the definition. If you still disagree, I would appreciate it if someone would show a trigonometric example (without phasors) of multiplication, and then illustrate how phasors simplify the math and arrive at the same answer, as the article implies, or did before I removed this statement:

"Noting that a trigonometric function can be represented as the real component of a complex quantity, it is efficacious to perform the required mathematical operations upon the complex quantity and, at the very end, take its real component to produce the desired answer."


FWIW, what we seem to be talking about here is the analytic representation of a signal, $s(t) + j\hat{s}(t)$, where $\hat{s}(t)$ is the Hilbert transform of $s(t)$. The analytic representations of $\cos(\omega_1 t)$ and $\cos(\omega_2 t)$ are $e^{j\omega_1 t}$ and $e^{j\omega_2 t}$. And the product of the analytic signals is $e^{j(\omega_1+\omega_2)t}$, which represents the real signal $\cos((\omega_1+\omega_2)t)$, which is not the product of $\cos(\omega_1 t)$ and $\cos(\omega_2 t)$. Therefore we have to be careful not to mislead people (and ourselves) about the multiplication of analytic signals. And I think the same goes for phasors. We really need an example of the multiplication property in action. When and why would someone want to use it?

--Bob K 14:44, 21 August 2007 (UTC)[reply]
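A quick numerical check of the analytic-signal point above; this is only a sketch (it assumes NumPy and two arbitrary tones at 3 Hz and 5 Hz):

    import numpy as np

    t = np.linspace(0, 1, 1000)
    w1, w2 = 2*np.pi*3, 2*np.pi*5         # two arbitrary frequencies

    # Analytic representations of cos(w1 t) and cos(w2 t)
    a1 = np.exp(1j*w1*t)
    a2 = np.exp(1j*w2*t)

    # Real part of the product of analytic signals: cos((w1+w2) t)
    prod_analytic = (a1*a2).real

    # Actual product of the real signals:
    # cos(w1 t)cos(w2 t) = 0.5[cos((w1+w2)t) + cos((w1-w2)t)]
    prod_real = np.cos(w1*t)*np.cos(w2*t)

    print(np.allclose(prod_analytic, prod_real))       # False: they differ
    print(np.max(np.abs(prod_analytic - prod_real)))   # ~1, the missing terms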

Yeah, I just took a look at the source link and it's quite confused. As you are probably aware, using phasors in AC circuit analysis amounts to 'pretending' that the excitations are of the form $Ae^{j(\omega t + \theta)}$ (which is consistent with your 'FWIW' statements). Under this pretense, the ratio of the voltage and current associated with a circuit element is a constant complex number (no time dependence):

    $\dfrac{v(t)}{i(t)} = \dfrac{V e^{j\omega t}}{I e^{j\omega t}} = \dfrac{V}{I}$

But this is just the impedance of the circuit element, and is the result we get by taking the ratio of the phasor voltage and current associated with the circuit element. Clearly, the impedance is not associated with a time function and thus is not a phasor.
The product of the voltage and the complex conjugate current associated with a circuit element is also a constant complex number:

    $v(t)\,i^*(t) = V e^{j\omega t}\, I^* e^{-j\omega t} = V I^*$

But this is just the complex power associated with the circuit element, and is the result we get by multiplying the phasor voltage and conjugate phasor current associated with a circuit element. And, as with impedance, the complex power is not associated with a time function and thus is not a phasor. Alfred Centauri 15:22, 21 August 2007 (UTC)[reply]
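For concreteness, here is a small sketch of both results (it assumes a made-up series RL element at 60 Hz and a 120 V rms source; the values are purely illustrative):

    import numpy as np

    w = 2*np.pi*60                 # angular frequency (rad/s), assumed
    Z = 10 + 1j*w*0.02             # assumed 10-ohm, 20 mH series element

    V = 120*np.exp(1j*0.0)         # rms voltage phasor, 120 V at 0 degrees
    I = V/Z                        # rms current phasor

    t = np.linspace(0, 0.05, 5)
    v = V*np.exp(1j*w*t)           # 'pretend' complex excitation V e^{jwt}
    i = I*np.exp(1j*w*t)

    print(v/i)                     # constant = Z at every t (time cancels)

    S = V*np.conj(I)               # complex power; constant, NOT a phasor
    print(S.real, S.imag)          # average power (W), peak reactive power (var)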


If http://en.wikibooks.org/wiki/Circuit_Theory/Phasor_Arithmetic is "confused", then I am inclined to remove the link to it, because I don't know how to fix it. But that link was intended to clarify this statement:

"As with other complex quantities the exponential (polar) form,  A•ej θ, simplifies multiplication and division, while the Cartesian (rectangular) form,  A•cos(θ) + jA•sin(θ), simplifies addition and subtraction."

After reading what you said, I think what's missing is a statement that the product of a phasor and a complex impedance is another phasor. But the product of two phasors (or the square of one) does not produce another phasor.


While we're on a roll, what do you think of this excerpt:

"...the complex potential in such fields as electromagnetic theory, where—instead of manipulating a real quantity, u—it is often more convenient to derive its harmonic conjugate, v, and then operate upon the complex quantity u + jv, again recovering the real component of the complex "result" as the last stage of computation to generate the true result."
  • Is it useful/helpful?
  • Does it contribute to the understanding of "phasor"?
  • Does the term "operate on" need clarification? E.g., is multiplication restricted to just one "complex potential" and something passive?

--Bob K 16:34, 21 August 2007 (UTC)[reply]

"Is it useful/helpful?" IMHO, No.
"Does it contribute to the understanding of "phasor"? I think I answered that already ;<)
"Does the term "operate on" need clarification?" Not after you've deleted that material.
You know, the quote above regarding complex math isn't necessary either. I understand that everybody wants to contribute something to Wikipedia but, as a result, there's a lot of excess verbosity in articles where a wikilink would suffice. What do you think? Alfred Centauri 21:21, 21 August 2007 (UTC)[reply]

Good idea! I also moved the trig stuff to a more appropriate article.

--Bob K 21:54, 21 August 2007 (UTC)[reply]

DC is sinusoid of 0 frequency[edit]

I removed this statement from the Circuit Laws section for the reasons that it isn't quite correct and isn't needed anyhow to justify the use of phasors. The problem is that phasors are complex numbers in general. DC circuits do not have complex voltages or currents. So, while phasors generalize DC circuit analysis to AC circuits, we can't really go back the other way unless we want to admit complex DC sources. Alfred Centauri 01:32, 27 February 2006 (UTC)[reply]

Well, yes, but the impedances of an inductor and a capacitor go to 0 and infinity, respectively, at DC. So provided there are no complex DC sources in the circuit, there will be no complex voltages or currents in the circuit. Plugwash 19:40, 28 March 2006 (UTC)[reply]

But my point is precisely that equating DC with a zero frequency sinusoid, as you have done above, is not quite correct. Consider the following AC circuit:

A voltage source of 1 + j0 Vrms in series with a 1 ohm resistor and a current source of 0 − j1 Arms. The average power associated with the resistor is 1 W and is independent of frequency, right? But wait; recall that the time domain voltage source and current source functions are given by:

    $v_s(t) = \sqrt{2}\cos(\omega t)\ \mathrm{V}, \qquad i_s(t) = \sqrt{2}\cos(\omega t - 90^\circ) = \sqrt{2}\sin(\omega t)\ \mathrm{A}$

Setting the frequency to zero we get:

    $v_s(t) = \sqrt{2}\ \mathrm{V}, \qquad i_s(t) = 0\ \mathrm{A}$

With a 'DC' current of 0 A, the power associated with the resistor is 0 W, but this result conflicts with the result above. Clearly, in the context of AC circuit analysis, it is not quite correct to say that DC is just zero frequency. Alfred Centauri 22:25, 28 March 2006 (UTC)[reply]
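A numeric restatement of the paradox; a sketch assuming NumPy (the series connection forces the source current through the resistor):

    import numpy as np

    R = 1.0  # ohms

    def avg_power(w, cycles=1000, n=200001):
        """Average power in R carried by i_s(t) = sqrt(2) sin(w t)."""
        T = cycles*2*np.pi/w if w != 0 else 1.0
        t = np.linspace(0, T, n)
        i = np.sqrt(2)*np.sin(w*t)
        return np.mean(i**2)*R

    print(avg_power(2*np.pi*60))   # ~1.0 W, for any nonzero frequency
    print(avg_power(0))            # 0.0 W: the zero-frequency 'sinusoid' vanishes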

Here's something else to consider. The rms voltage of a DC source is simply the DC voltage value. The rms voltage of an AC voltage source is the peak voltage over the square root of 2. Since this result is independent of frequency, it seems reasonable to believe that the rms voltage of a zero frequency cosine is equal to the rms value for any non-zero frequency cosine. However, if we insert a frequency of zero into a cosine, the result is the constant peak value, not the constant rms value. Once again, it does not appear to be entirely correct to say that a DC source is a sinusoidal source of zero frequency. Alfred Centauri 23:36, 28 March 2006 (UTC)[reply]

Ahh yes, but you introduced a complex source. If all DC sources are real then all DC voltages and currents must also be real, and the power calculations work fine. Plugwash 01:33, 29 March 2006 (UTC)[reply]

Not true! Look at the expression for i_s(t). That is a real source, my friend, regardless of frequency. It is the phasor representation of this source that is complex. Further, look at the 2nd example I gave. No complex sources there, right? Alfred Centauri 02:16, 29 March 2006 (UTC)[reply]

Phasor analysis cannot be used for power calculations at all, since the basis of phasor notation is a magnitude and angle for a single cosine at a particular frequency. You just can't multiply two phasors together and get a phasor out - that's why AC power analysis is trickier. To get the rms power, you can multiply the phasor representations of the rms values together to get a complex number that gives the real and imaginary power, but then the rms value of a zero-frequency current at π/2 will be 0... DukeEGR93 23:50, 6 November 2006 (UTC)[reply]

Phasor analysis can and is used for power calculations. Although your statement is correct that the product of two phasors is not a phasor, this fact does not imply your assertion. After all, the fact that impedance, being the ratio of two phasors, is not a phasor does not make impedance any less useful a concept. Similarly, the complex power given by the product of the voltage and conjugate current (rms) phasors, while not being a phasor, is nonetheless a valid and useful quantity whose real part is the time average power (not the 'rms' power!) and whose imaginary part is the peak reactive power.

Your statement that "the rms value of a zero-frequency current at π/2 will be 0" is not even wrong. There is no such thing as an rms value at π/2. The rms value of a unit amplitude sinusoid - regardless of frequency or phase - is $1/\sqrt{2}$. Alfred Centauri 00:29, 7 November 2006 (UTC)[reply]

Well, we will have to disagree with that latter one, since the rms of $\cos(0\cdot t + \pi/2)$ is certainly not $1/\sqrt{2}$ - in the singular case of DC, the phase is important to determining the rms value. For the former, perhaps just as you are saying $\cos(\omega t)$ is not a sinusoid if $\omega = 0$, I would say that it is not phasor analysis being used to compute power (or find impedance values, for that matter), but rather phasors used to find complex numbers that are not in and of themselves phasors. DukeEGR93 04:05, 7 November 2006 (UTC)[reply]

There's no room for disagreement here. By definition, the rms value of a sinusoid is:

    $X_{\mathrm{rms}} = \sqrt{\dfrac{1}{T}\displaystyle\int_0^T \left[A\cos\!\left(\dfrac{2\pi t}{T} + \phi\right)\right]^2 dt} = \dfrac{A}{\sqrt{2}}$

Note that this result holds in the limit as the period T goes to infinity. Thus, your assertion that the phase is important in determining the rms value is obviously false - even in the case where the frequency is zero (infinite period). If this isn't clear to you yet, then think about the time average power delivered to a resistor by a sinusoidal voltage source with an arbitrarily large period (T = 15 billion years, for example) compared to that delivered by a true DC voltage source. Alfred Centauri 05:50, 7 November 2006 (UTC)[reply]

Presumably you are not saying the rms value of either $\cos(0\cdot t)$ or $\sin(0\cdot t)$ is $1/\sqrt{2}$, are you? Beyond that, I am absolutely correct that the rms value of $\cos(0\cdot t + \phi)$ depends on $\phi$ - there is simply no way around that, either. DukeEGR93 06:20, 7 November 2006 (UTC)[reply]
You have presumed incorrectly. I do indeed claim that the rms value of either $\cos(0\cdot t)$ or $\sin(0\cdot t)$ is $1/\sqrt{2}$. This is clearly so from the definition of the rms value of a sinusoidal function of time I gave above. The fact that you choose to ignore a valid mathematical result is troubling enough, but then you proceed to compound your error by equating $\cos(0\cdot t)$ with $1$. Don't you see? The former is a function of time but the latter is not. If you do not see the difference, then integrate both expressions with respect to time and see what happens. Finally, your assertion that the rms value of a zero frequency sinusoidal function of time depends on the phase violates a fundamental principle - that absolute phase is not physically meaningful. To claim otherwise is equivalent to claiming that a choice of zero time has physical meaning.
Look, let's say you have a sinusoidal voltage source connected to a resistor. You claim that $v(t) = A\cos(\omega t)$ V. However, I claim that $v(t) = A\cos(\omega t' + \phi)$ where $t' = t - \phi/\omega$. That is, my choice of zero time differs from your choice of zero time. Nonetheless, we both calculate the same rms voltage across the resistor. This is as it should be, because the choice of zero time (or equivalently, the phase) is arbitrary, so we should calculate the same average power delivered to the resistor. Note that I have not placed any constraint on the frequency here. In fact, according to the principle that the choice of zero time has no physical meaning, this result should hold for the case of zero frequency. However, according to your claim, in the case of zero frequency, we will calculate different average powers! Which one is correct? Which choice of time zero is correct? Surely you can see that this is an absurd result! Alfred Centauri 22:09, 7 November 2006 (UTC)[reply]
So, what is the rms value of $\sin(0\cdot t)$? I believe you are banking too much on your definition when this is clearly a singular case worthy of a singular definition. Your transformation of variables has a major issue in that you are transforming a finite, bounded space into an unbounded, infinite space. Though, in all this, I very much appreciate the prompts to really think about these things. One of my colleagues in the Math department, when posed with the question, immediately answered "I'm not a statistician," which I thought was a bit of a cop-out.  :) DukeEGR93 01:29, 8 November 2006 (UTC)[reply]
Phasors have a MAGNITUDE and a PHASE. The two are independent of each other; in this case the MAGNITUDE is actually an RMS value, but whether you divide by root(2) or not, the magnitude of the phasor is independent of phase. Remember, a phasor is always over a COMPLETE cycle, so it doesn't matter where in time you feel like labeling the start of that cycle. A complete cycle of a 0 frequency phasor in fact never ends - you just integrate on into infinity. The function does not converge... so the above definition of the RMS value of a 0 frequency phasor is undefined: infinity over infinity. However, it can be shown that with DC you can choose ANYTHING besides two zeroes for your limits of integration and you just get the magnitude of the DC signal. In reality, the RMS value of a DC signal is just the magnitude of the DC signal. Just substitute 0 for T in the above equation. Or think about it graphically. A flat line at y=2 has a square of y=4. No matter where you start or stop integrating (as long as you eventually stop, i.e. you don't go to either infinity), you have a rectangle of area 4·T. This is then divided by T in the equation, so you have the square root of 4... which is 2.
Reply to anonymous. (1) The magnitude of a phasor can be the peak value of the associated sinusoid. It is not always an RMS value. (2) The statement "a phasor is always over a COMPLETE cycle" doesn't even rise to the level of being wrong. A phasor is a constant complex number - it has no time dependence so the notion of a cycle is meaningless in the context of a phasor. (3) Regarding your 'argument' that the RMS value of 0 frequency phasor is undefined: a phasor doesn't have a frequency. (4) Infinity over infinity is not undefined - it is indeterminate. (5) Does the Fourier integral of cos(t) converge? Alfred Centauri 01:41, 17 May 2007 (UTC)[reply]
Somehow all this reminded me of a quote, "There's no such thing as chaos, only really long transients." Struck through the 'rms' in front of 'power' above. Long day... How about this - the sinusoidal representation of a DC value $V$ is really $V\cos(0\cdot t)$, such that its phasor notation would be $V\angle 0$. That solves the average power problem, so long as you only use the rms version of $V$, in that the result will be purely real, given that phase angles will always be zero for DC quantities under the above representation. Then again, there's no such thing as DC, only really long periods... DukeEGR93 04:59, 7 November 2006 (UTC)[reply]

While $\cos(0\cdot t) = 1$ is true for finite t, it does not hold if t goes to infinity (0 · infinity can be any number). However, to take the rms value of a zero frequency sinusoid, we must in fact integrate over all time. It is for this reason that the rms value of the zero frequency sinusoid on the left is $1/\sqrt{2}$ while the rms value of the constant on the right is $1$. Alfred Centauri 06:16, 7 November 2006 (UTC)[reply]

Reply to Alfred's comment above me: "A phasor is a constant complex number": As a side note, while it's true phasors are complex constants in lumped circuits (which is what is being discussed here), they're complex functions of one real spatial variable in transmission lines, and they're complex functions of three real spatial variables in electromagnetic waves. Alej27 (talk) 19:29, 30 October 2020 (UTC)[reply]

Different from the concept of Physics?[edit]

Well, I think that phasors in electronics are an application of those concepts....

Gabriel

Electronics phasors don't behave like vectors, and they don't obey the rules of vectors studied in physics (statics & dynamics).

Nauman —Preceding unsigned comment added by 202.83.173.10 (talk) 11:55, 24 October 2007 (UTC)[reply]


Nauman is right. Electronics phasors don't behave the same way as physics phasors; they are different. For reference, see Fundamentals of Electric Circuits by Sergio Franco, chapter 10, AC Response. —Preceding unsigned comment added by 202.83.164.243 (talk) 14:53, 11 November 2007 (UTC)[reply]

Transients & Phasor analysis[edit]

I removed the following text from the intro:

(Important note: The phasor approach is for "steady state" calculations involving sinusoidal waves. It cannot be used when transients are involved. For calculations involving transients, the Laplace transform approach is often used instead.)

I don't believe that this statement is necessary even if it were true, but the fact is, phasors can be used to analyze transients. The bottom line is that the complex coefficients of a Fourier series or the inverse Fourier integral are phasors.

It is usually stated that phasor analysis assumes sinusoidal steady state at one frequency and this is true as far as it goes. However, it is quite straightforward to extend phasor analysis to a circuit with multiple frequency sinusoidal excitation. When one does this, it is clear this extension is nothing other than frequency domain analysis. In other words, phasor analysis is frequency domain analysis at a fixed frequency. Alfred Centauri 23:22, 27 April 2007 (UTC)[reply]
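To make the multiple-frequency extension concrete, here is a sketch (the RC low-pass and the two-tone excitation are invented for the example; only the per-frequency phasor bookkeeping is the point):

    import numpy as np

    R, C = 1e3, 1e-6                      # assumed 1 kohm, 1 uF low-pass
    t = np.linspace(0, 0.02, 20001)

    # Excitation: a sum of two sinusoids, each handled by its own phasor
    sources = [(5.0, 2*np.pi*50, 0.3),    # (amplitude, w, phase)
               (2.0, 2*np.pi*300, -1.0)]

    v_out = np.zeros_like(t)
    for A, w, ph in sources:
        Vin = A*np.exp(1j*ph)             # phasor of this component
        H = 1/(1 + 1j*w*R*C)              # voltage divider Zc/(R + Zc)
        Vout = H*Vin                      # output phasor at this frequency
        v_out += (Vout*np.exp(1j*w*t)).real  # back to time, then superpose

    # v_out is the steady-state response to the two-tone input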

This article (Phasor (electronics)) describes essentially the same thing as Phasor (physics). I believe there is no reason to maintain two different articles. The main difference between them is that this article describes the concept of phasors from the viewpoint of an engineer who uses it in a specific domain, while the other article is more general, but lacks some details that this one has. —The preceding unsigned comment was added by 129.177.44.96 (talk) 13:32, 2 May 2007 (UTC).[reply]

I agree they describe the same thing. The physics article takes a vector approach, while the electronics article is based on complex numbers. The electronics article is far more practical, while the physics article is far more theoretical, and IMO, less useful. Concatenating the electronics article to the physics article as is would probably be a good idea. Neither article is very long, though the electronics article could do without the power engineering section - it doesn't add much to the concept of phasors.

All I can say is that the Phasor (electronics) page helped me pass ECE 203. The more general physics page wouldn't have helped nearly as much.

Merging does not mean removing information. The techniques discussed in Phasor (electronics) do not only apply to electronics. A more general context would be appropriate. —The preceding unsigned comment was added by 129.177.44.134 (talk) 15:11, 10 May 2007 (UTC).[reply]


THEY ARE THE SAME KNOWLEDGE. THEY ARE THE SAME MATERIAL. THEY ARE SYNONYMOUS. To make people happy, just put the same concepts in both places.


Would it be okay to call "Phasor (Physics)" "Phasor" and rename "Phasor (electronics)" to "Application of phasors in electronics" or something of the like? All redundant material introducing abstract phasors could be deleted from the latter, and it could be considered building on the former. —1st year EE student —Preceding unsigned comment added by 128.54.192.216 (talk) 16:07, 27 September 2007 (UTC)[reply]

I agree with the above suggestion. We can call Phasor (Physics) simply Phasor and rename the electronics article to Applications of Phasors in Electronics. We would cut down on redundancy and make both articles mesh together better. xC | 22:16, 1 November 2007 (UTC)[reply]

Definitely merge them. Mpassman (talk) 18:18, 17 November 2007 (UTC)[reply]

The Role of Linearity[edit]

I like the section on phasor arithmetic, and I suggest noting that linearity is also an important part of the technique. If, for example, the differential equation in the example were not linear, then phasors would be for naught. gaussmarkov 0:28, 6 September 2007 (UTC)

Done. Alfred Centauri 00:52, 6 September 2007 (UTC)[reply]

Importance of the properties of the Re{} operator[edit]

I think it should be better explained why the Re{} operator is usually dropped before some complicated algebra and then reapplied at the end, e.g., how the differential operator can be moved inside the Re{} operator. -Roger 23:14, 1 November 2007 (UTC)[reply]

Isn't the explanation simply the orthogonality of even and odd functions? Alfred Centauri 02:46, 2 November 2007 (UTC)[reply]


Orthogonality (or something a little more elusive) is the reason for wanting to do operations in the complex domain, as I will try to explain. Linearity, loosely defined as operations that affect the Re and Im components independently, is a characteristic of certain mathematical operations that may be moved inside the Re operator without changing the net effect. Such operations include differentiation, integration, time-delay, multiplication by a (real-valued) scalar, and addition. No orthogonality is required to do these things.

If we limited our phasor arithmetic to those kinds of operations, the Im part we chose would not matter, and there would be no benefit at all. The benefit comes from choosing a waveform orthogonal to the Re component, such that:

    $A\cos(\omega t + \theta) \;\to\; A\cos(\omega t + \theta) + jA\sin(\omega t + \theta) = Ae^{j(\omega t + \theta)}$

which has the useful property:

    $Ae^{j(\omega(t-\tau) + \theta)} = e^{-j\omega\tau}\cdot Ae^{j(\omega t + \theta)}$

whereas:

    $A\cos(\omega(t-\tau) + \theta)$ is not a real scalar multiple of $A\cos(\omega t + \theta)$.

We see that in the complex domain, a time-delay can be represented more simply by just the product with a complex scalar (which is why impedances are complex); see the numeric sketch below.

  • Multiplication of 2 phasors, such as by a mixer or a square-law rectifier, is not linear. Phasors may look like scalars, but one has to remember that they are actually a shorthand notation for a time-variant waveform.

So it appears that the motivation for working in the complex domain is to simplify the effect of time-delays / phase-shifts caused by linear devices with "memory".

--Bob K 15:27, 12 November 2007 (UTC)[reply]
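A numeric sketch of the delay property described above (assumptions: NumPy, a 50 Hz tone, and an arbitrary 1.3 ms delay; none of these values come from the discussion):

    import numpy as np

    w = 2*np.pi*50                       # assumed angular frequency (rad/s)
    tau = 1.3e-3                         # assumed time delay (s)
    t = np.linspace(0, 0.04, 4001)

    z = np.exp(1j*w*t)                   # complex-domain waveform e^{jwt}

    delayed = np.exp(1j*w*(t - tau))     # literal time delay
    scaled = np.exp(-1j*w*tau) * z       # the same delay as one complex multiply

    print(np.allclose(delayed, scaled))  # True
    # The real projection is the delayed cosine:
    print(np.allclose(delayed.real, np.cos(w*(t - tau))))  # True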

problem with the new introduction[edit]

The word "phasor" has two meanings. One meaning includes the factor, and the other excludes it. Until recently these were relegated to different articles, one called "Phasor (physics)" and the other called "Phasor (engineering)" (I think). But there was fairly strong sentiment to merge the articles, which resulted in this one. The new introduction appears to be heading back to the "Phasor (engineering)" article.

--Bob K (talk) 17:29, 17 December 2007 (UTC)[reply]

My bias is showing; but the opening paragraph should follow the formula of giving at least a concise definition of the subject. The introduction isn't new; it's a rehash of a paragraph from a month or so ago. Let's rewrite the opening so that it allows both definitions. If the meanings are really disparate we need two articles, though I don't think we ever had "Phasor (engineering)". --Wtshymanski (talk) 17:43, 17 December 2007 (UTC)[reply]

OK, I looked back. It was "Phasor (electronics)". I'm sure we can come up with a suitable introduction. It really is the same concept in both disciplines. But the physicists are less inclined to solve circuit equations.

--Bob K (talk) 23:13, 17 December 2007 (UTC)[reply]

Phasor diagrams[edit]

I think that there should be a separate article dedicated to phasor diagrams, describing their construction and meaning.

--User:Vanished user 8ij3r8jwefi 17:06, 21 May 2008 (UTC)[reply]

Sinusoids are easily expressed in terms of phasors, which are more convenient to work with than sine and cosine functions. A phasor is a complex number that represents the amplitude and phase of a sinusoid. Phasors provide a simple means of analyzing linear circuits. —Preceding unsigned comment added by 210.48.147.2 (talk) 07:00, 27 July 2008 (UTC)[reply]

Phasors in Communication Systems[edit]

Does anyone have images of phasors in relation to FM and PM modulation? That would be a good section to add. —Preceding unsigned comment added by Daviddoria (talkcontribs) 17:27, 14 September 2008 (UTC)[reply]

Clarify "sine wave"[edit]

The section called "Definition" shows only cosine functions, so the term "sine wave" is being used to refer to expressions that involve only cos(...) terms. I think this customary paradox of mathematical terminology should be mentioned, even though the definition in the article refers only to the complex number representation.

Tashiro (talk) 17:43, 1 February 2009 (UTC)[reply]

Really, is this going to confuse anyone? There must be about 0.001% of the population who were awake for the math class that talked about 'sine' and then ran out before hearing about 'cosine'. The distinction is immaterial for the purposes of this article. --Wtshymanski (talk) 18:06, 1 February 2009 (UTC)[reply]
I think the point is not only that, but: if anyone has a serious sine/cosine confusion, do you really expect the complex (no pun intended) definition of such functions, which is on the very same line, not to scare them out of the article? Habbit (talk) 18:11, 1 February 2009 (UTC)[reply]

The Wikipedia article on sine waves defines them by using the sin() function. It says that cosine waves are "sinusoidal". I understand that this is essentially a quibble, but people may have walked out of class 40 years ago, and which facts are remembered after that much time is rather random. It would only require one sentence to clarify this. Tashiro (talk) 06:50, 9 February 2009 (UTC)[reply]

You are ignoring the pink elephant in the middle of the room: people confusing sin and cos will not understand the following concepts used in the article:
  • Complex algebra
  • De Moivre's identity and/or the definition of sin and cos as complex exponentials
  • Complex analysis (differentiation and integration)
  • The confusing mixup of all of the above
Why am I so sure? Because trigonometry appears constantly in complex algebra! So yes, the "sine wave" reference could be clarified, but that would be like turning "Hello, 友人" into "Hello (greeting), 友人". It is indeed clearer, but only by an epsilon, and it does not help the understanding of the rest of the article, whereas a change like "Hello, 友人 (friend)" radically would. Habbit (talk) 21:08, 9 February 2009 (UTC)[reply]


I made "sine waves" an internal link. The linked article already contains this clarification:

A cosine wave is said to be "sinusoidal", because $\cos(x) = \sin(x + \pi/2)$, which is also a sine wave with a phase-shift of π/2.

--Bob K (talk) 22:04, 10 February 2009 (UTC)[reply]

The illustration: "Graph of a phasor as a rotating vector"[edit]

I teach a course on linear circuits, including a section on Phasors. The illustration titled "Graph of a phasor as a rotating vector" is both helpful and awkward at the same time. Some students find it harder to understand a typical phasor diagram (stationary, drawn on paper) after looking at the spinning phasor illustrated. I have two suggestions.

1.) In the top half of the illustration, some students cannot fathom that in the present illustration the time axis itself is sinking down the screen as the vector rotates. Instead, fix the time axis in place and let the dot move both vertically and horizontally. One could show exactly one cycle of a (stationary) sinusoid so that as the phasor rotates through one revolution a dot on the sinusoid sweeps upwards following the function, then rolls over to the bottom and starts upwards again with the next rotation of the phasor. A dashed line to show the dot's projection on the time axis would also help. Be sure the sinusoid has some initial phase offset, for example,

y(t) = cos(t + 2pi/3)

Label the horizontal y(t) axis with "+1" on the right and "-1" on the left. Label the vertical axis with a "t" at or near the top and with a few tic marks labeled in radians from 0 to 2pi.

2.) On the bottom half of the illustration create a strobe-light effect to illustrate a stationary view of the phasor, as would be drawn on paper. Show the rotating phasor dimmed to grey and dark blue most of the time. When the dot in the top half of the illustration rolls over to the t = 0 spot, show the bottom half with a bright white background and bright blue lines (as is presently shown). Then as the phasor continues to rotate, return to the dimmed view in the bottom half of the illustration. What I have in mind is the effect of a strobe light that flashes at time t = 0 and every period (T, 2T, 3T, etc.) thereafter, thus illustrating how we interpret a phasor diagram drawn on paper. Students need to understand that what we draw statically on a page of paper represents the angular relations at time t = 0.

(I sure wish I had the tools to put such an illustration together and just show it!)

Dfdeboer (talk) 22:58, 18 November 2009 (UTC)[reply]
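For what it's worth, a minimal matplotlib sketch along these lines; the phase offset and strobe behavior follow the suggestions above, and everything else (figure layout, frame rate) is an assumption:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    phi = 2*np.pi/3                     # initial phase offset, as suggested
    tt = np.linspace(0, 2*np.pi, 400)   # one full cycle, fixed time axis

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(4, 8))
    ax1.plot(np.cos(tt + phi), tt)      # stationary sinusoid, time going up
    dot, = ax1.plot([], [], 'ro')
    ax1.set_xlabel('y(t)'); ax1.set_ylabel('t (rad)')

    ax2.set_xlim(-1.2, 1.2); ax2.set_ylim(-1.2, 1.2); ax2.set_aspect('equal')
    vec, = ax2.plot([], [], 'b-', lw=2)

    def update(frame):
        t = tt[frame]
        dot.set_data([np.cos(t + phi)], [t])   # dot sweeps up the sinusoid
        vec.set_data([0, np.cos(t + phi)], [0, np.sin(t + phi)])
        vec.set_alpha(1.0 if t < 0.1 else 0.3) # strobe: bright only at t = 0
        return dot, vec

    ani = FuncAnimation(fig, update, frames=len(tt), interval=20, blit=True)
    plt.show()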

The rotating phasor isn't actually a phasor; phasors are time-independent, so they can't rotate by themselves. What rotates are sinors (also known as instantaneous complex value), which are equal to a phasor times e^(j w t). --Alej27 (talk) 19:06, 30 October 2020 (UTC)[reply]

Phasor addition missing steps[edit]

The Phasor Addition section took me a little bit of time to make sense of. The step-by-step calculation is trivial for the first few steps (combining terms in the Re operator, etc.), and then out of nowhere: Boom! A3 and theta3 are introduced with a fairly complex definition for each. It eventually made sense to me when I drew out the phasor diagram and looked at the x/y components, but something should help fill that gap. --Gradenko (talk) 07:44, 23 January 2011 (UTC)[reply]
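The gap is easiest to see numerically. A sketch, reusing the article's A1, theta1, A2, theta2 names with arbitrary values:

    import numpy as np

    A1, th1 = 3.0, 0.4        # arbitrary example values
    A2, th2 = 2.0, 2.1

    # Add the phasors as complex numbers...
    p3 = A1*np.exp(1j*th1) + A2*np.exp(1j*th2)
    A3, th3 = abs(p3), np.angle(p3)   # np.angle uses atan2, so it is quadrant-safe

    # ...which is exactly the x/y-component formula that appears 'out of nowhere':
    x = A1*np.cos(th1) + A2*np.cos(th2)
    y = A1*np.sin(th1) + A2*np.sin(th2)
    print(np.isclose(A3, np.hypot(x, y)), np.isclose(th3, np.arctan2(y, x)))

    # Check against the time-domain sum at a few instants
    w = 2*np.pi
    t = np.linspace(0, 1, 5)
    print(np.allclose(A1*np.cos(w*t + th1) + A2*np.cos(w*t + th2),
                      A3*np.cos(w*t + th3)))   # True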

Formatting errors[edit]

I am not sure how to fix them, so it would be great if somebody else could fix the syntax so that the equations start showing up. Cheers. Vignesh.Mukund (talk) 11:20, 10 February 2014 (UTC)[reply]

Origin[edit]

The article had quite a poorly developed lead. I've added more essentials, including the origin as it is commonly attributed in (US) EE textbooks, but upon glancing at some Google Books snippets from Kline's biography of Steinmetz, I think the origin story was more complex. Alas, I don't have immediate access to Kline's book, but it's something to keep in mind (and go to) if counterclaims appear. 86.121.137.79 (talk) 22:11, 13 January 2015 (UTC)[reply]

Added an extra section of why study phasors and a bit more information on phasor diagrams[edit]

Phasors are used almost everywhere in power engineering, and their usefulness should be explained. There is also no significant mention of lead and lag, which are important when we compare voltages and currents. A picture would help to illustrate lead and lag, for example. It should also be mentioned that phase angles are relative to one another and that you can set a reference point (which is usually the voltage source). — Preceding unsigned comment added by 204.98.7.11 (talk) 06:15, 7 December 2016 (UTC)[reply]

Useless[edit]

This article is very good for we that have studied phasors. However, it is useless to a newcomer. They will leave with less knowledge... why am I explaining this? Try to fix it. Over and out. — Preceding unsigned comment added by Longinus876 (talkcontribs) 15:41, 15 December 2017 (UTC)[reply]

What's really useless is your non-specific criticism and lack of constructive suggestions. It's not WikiPedia's job to write every article from the bottom up assuming the reader has no prerequisite knowledge of the topic. Please show a little appreciation and respect for those (unlike yourself) who have actually volunteered their time and experience to creating an article that is at least (in your words) "very good for we that have studied phasors".  Over and out.
--Bob K (talk) 15:27, 29 April 2020 (UTC)[reply]

Mistake on the definition of sinor?[edit]

In the first paragraph of this article, it is stated that phasors are also called sinors (in older textbooks), and even a reference was cited to this. So, it seems like this is true.

However, in the book Fundamentals of Electric Circuits by Charles Alexander and Matthew Sadiku, phasors and sinors are defined differently. In the case of phasors, they are defined as usual, that is as Ae^(jθ); but sinors are defined as Ae^(jθ)e^(jωt) = Ae^(j(θ+ωt)).
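On that definition, the relation between the two would be (my paraphrase of the notation above):

    $v(t) = \operatorname{Re}\{\underbrace{Ae^{j\theta}}_{\text{phasor}}\, e^{j\omega t}\} = \operatorname{Re}\{\underbrace{Ae^{j(\theta + \omega t)}}_{\text{sinor}}\} = A\cos(\omega t + \theta)$

so the phasor is the sinor with the $e^{j\omega t}$ time dependence stripped off.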

I think this should be cleared up, so I'd like to confirm whether whoever cited that sinors are the same as phasors actually didn't confuse the two terms. Perhaps they could share a screenshot/photo of the relevant page of the cited textbook.

Alej27 (talk) 07:01, 31 March 2019 (UTC)[reply]

Physics vs EE sign conventions[edit]

I was recently editing the article of Fourier transforms, and I found a citation to a source with a ton of information on phasors. It's about the difference between i and j as representations of the complex component of phasors, as i is the usual standard in theoretical physics and j is the usual standard in electrical engineering, and all of the sign conventions that relate to that. It seems to be of sufficient relevance and importance to warrant being included in this article, but this doesn't mention anything of the sort. I don't have sufficient interest to want to invest the time to add the info to this article, but I figured I'd mention it here so that someone with an interest could. The source is "Sign Conventions in Electromagnetic (EM) Waves" (PDF). 74.96.192.195 (talk) 08:54, 8 October 2021 (UTC)[reply]

There are some very odd claims going on in that paper which I would treat as dubious without reliably published sources backing them up. The j in EE is defined exactly the same as the i in mathematics. I find it hard to believe that i (physics) = −j, and by implication, = −i (mathematics). Likewise, the idea that physics defines the positive direction of phasor rotation as clockwise, in contradistinction to EE, is also debatable. In mathematics, the positive direction of rotation is right-handed rotation, that is, anticlockwise looking down on the axis of rotation. I think the author is discussing one particular quantity (complex refractive index) where there is a difference of definition and drawing unwarranted conclusions from it by implication. SpinningSpark 17:37, 8 October 2021 (UTC)[reply]