E V Forum


Junior Member
Posts: 1
Reply with quote  #1 
To present my problem clearly: I've done some tests and collected data on the "history" of USB charging. Maybe it looks boring, but anyway, I like it.
The other question is: how does a device know what current the charger is able to provide?

OK, let's look at the data. The USB-IF standards define current limits chronologically:

USB 2.0 (2000):

  • 100 mA for low-power devices
  • 500 mA for high-power devices

Battery Charging Specification (BCS) 1.0 (2007):

  • 1.5 A for charging ports

USB 3.0 (2008):

  • 150 mA for low-power devices
  • 900 mA for high-power devices

Battery Charging Specification 1.2 (2010):

  • 5 A for charging ports

And finally USB Power Delivery (PD), since 2012, which uses a negotiation protocol on the power pins to discover the host's maximum current.
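The limits listed above can be collected into a small lookup table (a sketch of my own; the dictionary keys and function name are invented, not from any spec):

```python
# Default current limits (mA) defined by successive USB-IF specs,
# exactly as listed above. Structure and names are my own invention.
USB_CURRENT_LIMITS_MA = {
    "USB 2.0 (2000)": {"low_power": 100, "high_power": 500},
    "BC 1.0 (2007)":  {"charging_port": 1500},
    "USB 3.0 (2008)": {"low_power": 150, "high_power": 900},
    "BC 1.2 (2010)":  {"charging_port": 5000},
}

def max_spec_current_ma(spec: str, port_class: str) -> int:
    """Look up the maximum current (mA) for a given spec and port class."""
    return USB_CURRENT_LIMITS_MA[spec][port_class]

print(max_spec_current_ma("USB 2.0 (2000)", "high_power"))  # 500
print(max_spec_current_ma("BC 1.2 (2010)", "charging_port"))  # 5000
```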

The client device should enumerate with the host to determine how much current the host can provide. However, it can safely assume at least 100 mA (150 mA on USB 3.x) even without enumeration. If the client finds out that it is connected to a charging port (CP), it can safely draw up to 1.5 A. In the case of a standard downstream port (SDP), the client has to request high-power mode via USB protocol communication (which the host may or may not allow) before drawing more current.
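The decision logic described above can be sketched roughly like this (my own simplification; the function name and parameters are invented for illustration):

```python
def allowed_current_ma(port_type: str, configured_ma: int = 0,
                       usb3: bool = False) -> int:
    """Current (mA) a client may safely draw, per the rules above.

    port_type:     "CP" (charging port) or "SDP" (standard downstream port).
    configured_ma: current granted by the host during enumeration (0 if none).
    usb3:          True raises the no-enumeration baseline from 100 to 150 mA.
    """
    baseline = 150 if usb3 else 100  # safe even without enumeration
    if port_type == "CP":
        return 1500                  # BC 1.0 charging-port limit
    # SDP: must ask the host, which may or may not grant high-power mode.
    return max(baseline, configured_ma)

print(allowed_current_ma("SDP"))                     # 100
print(allowed_current_ma("SDP", configured_ma=500))  # 500
print(allowed_current_ma("CP"))                      # 1500
```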

The full USB protocol is somewhat complicated (and I doubt chargers are talkative), so the BCS mechanism is more prevalent, isn't it?

And here is the problem that interests me. When I connect a device to a charger whose maximum output current is below the standard 1.5 A (and at least one side surely doesn't implement USB PD), the device starts charging at more than 100 mA, and even more than 500 mA. But how does it know that this is possible? The rated output is lower than the BCS 1.5 A, so the charger should not identify itself as a charging port, and thus the device should not exceed 500 mA.

I did some tests to try to falsify this.

I used these chargers (max. current, DC voltage):

  • a Few Years Old (FYO) charger with 2 ports – 1 A in total
  • 2016 tablet charger – 2 A, 5.2 V
  • 2010 charger – 150 mA LOL (but that is probably a mistake, because it is unbranded ...), 5.6 V

and devices:

  • 2011 Xperia mini pro
  • 2013 Moto X
  • 2016 Lenovo Tab 2 A10-30

Test results (same 12-cm cable used, devices charged below 50 %, screen off, maximum charging current after a while):

  • 2011 phone + 2010 charger = 400 mA
  • 2011 phone + FYO = 750 mA
  • 2011 phone + tablet charger = 650 mA or 750 mA (I repeated the test and every run gives one of these two values)
  • 2013 phone + 2010 charger = 600 mA
  • 2013 phone + FYO = 1000 mA (charger max.)
  • 2013 phone + 2016 tablet charger (max. 2 A) = 1300 mA (apparently device max.)
  • tablet + 2010 charger = 1000 mA
  • tablet + FYO = 1150 mA (exceeding the charger's maximum)
  • tablet + its charger = 1900 mA

I noticed two strange things:

  1. Every device draws a different current from the first (2010) charger. The tablet only has USB 2 (Snapdragon 210), so it cannot have negotiated the 900 mA high-power supply of USB 3.
  2. The tablet exceeding the FYO charger's maximum current seems dangerous, compared with the Moto X, which keeps relatively strictly to the maximum specified on the label.

I read an answer to a similar question, but it confuses me. A charger is a voltage source, so how is it possible for the charger's voltage to drop from the nominal ~5 V to below 2 V? Does a charger, as a power source, always have such a large internal resistance that the device must lower its current draw to reach 4.3 V on the battery (in the case of a li-ion cell), because of the voltage losses?
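To put numbers on that: if the sag really came from internal resistance alone, Ohm's law implies an implausibly large source resistance (a back-of-the-envelope sketch; all three values are assumed, not measured):

```python
# Assume a charger whose open-circuit voltage is 5.0 V sags to 2.0 V
# while supplying 1.0 A. The implied internal resistance would be:
v_open, v_loaded, i_load = 5.0, 2.0, 1.0   # V, V, A
r_internal = (v_open - v_loaded) / i_load  # Ohm's law: R = dV / I
print(r_internal)  # 3.0 ohms -- far more than a decent 5 V supply has
```

A drop that large more likely means the charger's regulator is current-limiting (folding the voltage back) rather than behaving as an ideal voltage source with a fixed series resistance.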

So, the final question:
How exactly is the maximum current draw discovered? And is it safe (because drawing more current than the charger was designed for might be a serious problem – see the test above)? Or should I compare the output ratings of the device's original charger and the actual charger before charging?

And a "bonus" question: how does a BCS 1.2 device distinguish between a BCS 1.0 and a BCS 1.2 charger (up to 1.5 A vs. 5 A)?
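As far as I understand BC 1.2, the device classifies the port with a D+/D- handshake rather than learning an exact current figure: it drives a small voltage (~0.6 V) on one data line and watches the other, because dedicated chargers short D+ to D- internally. A rough sketch of that logic (the function name is invented; the ~0.325 V threshold is my reading of the spec's VDAT_REF, so treat it as approximate):

```python
# Rough sketch of BC 1.2 primary/secondary detection as I understand it.
# The device never learns an exact current limit this way, only a port class.
VDAT_REF = 0.325  # V, approximate comparator threshold from BC 1.2

def classify_port(dminus_v_primary: float, dplus_v_secondary: float) -> str:
    """Classify a downstream port from the D+/D- handshake voltages.

    Primary detection:   device drives ~0.6 V on D+ and watches D-.
    Secondary detection: device drives ~0.6 V on D- and watches D+.
    """
    if dminus_v_primary < VDAT_REF:
        return "SDP"   # data lines not shorted: a normal host port
    if dplus_v_secondary > VDAT_REF:
        return "DCP"   # D+/D- shorted inside the charger: dedicated charger
    return "CDP"       # charging downstream port: data plus 1.5 A

print(classify_port(0.0, 0.0))  # SDP
print(classify_port(0.6, 0.6))  # DCP
print(classify_port(0.6, 0.0))  # CDP
```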


From Maxim:

In USB 2.0, it is during enumeration and configuration that the device learns how much current a USB port can source. Enumeration and configuration require a digital conversation between the device and the host. BC1.1 expands the USB spec. In addition to the USB 2.0 options, BC1.1 also allows "dumb" methods of determining port type so that, with some ports, charging can take place without enumeration.

So it is the connected device that performs the enumeration regarding the power supply (because it is the device which controls the current on the bus).

Enumeration is described by Maxim as:

The initial data exchange between the device and the host to identify device type.

From wiki:

The USB specification required that devices connect in a low-power mode and communicate their current requirements to the host, which then permits the device to switch into high-power mode.

In fact, the host can either permit or deny it, because an SDP too is either low-power or high-power.
And finally, thank you for taking the time to read my questions. All suggestions are useful to me.



Posts: 96
Reply with quote  #2 
It's more the device that determines how much current it draws; the charger's maximum is just the most it can put out.
It's all Ohm's law, V = IR, and the voltage drop you mention could have all sorts of causes – for example, there may be electronics inside the device which step the voltage up internally to charge the battery.
