Age | Commit message | Author |
|
At present early clock init happens in SPL. If SPL did not run (for
example, because U-Boot is chain-loaded from another boot loader) then
the clocks are not set up as U-Boot expects.
Add a function to detect this and call the early clock init in U-Boot
proper.
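A minimal sketch of how such a check might look (the register address, bit and helper names below are assumptions for illustration, not the actual change):

  #include <stdbool.h>
  #include <stdint.h>

  void clock_early_init(void);    /* existing Tegra early clock setup */

  /* Hypothetical check: PLLP enable bit, as SPL would have left it.
   * Real code would use the Tegra clock_ctlr struct and readl(). */
  #define PLLP_BASE_REG ((volatile uint32_t *)0x600060a0)
  #define PLL_ENABLE    (1u << 30)

  static bool early_clocks_already_configured(void)
  {
          return (*PLLP_BASE_REG & PLL_ENABLE) != 0;
  }

  void tegra_early_clock_init_if_needed(void)
  {
          /* If SPL never ran (e.g. U-Boot was chain-loaded), do the
           * early clock setup now, in U-Boot proper. */
          if (!early_clocks_already_configured())
                  clock_early_init();
  }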
Signed-off-by: Simon Glass <sjg@chromium.org>
|
|
A future patch will implement a clock uclass driver for Tegra. That driver
will call into Tegra's existing clock code to simplify the transition;
this avoids tying the clock uclass patches into significant refactoring
of the existing custom clock API implementation.
Some of the Tegra clock APIs that manipulate peripheral clocks require
both the peripheral clock ID and parent clock ID to be passed in together.
However, the clock uclass API does not require any such "parent"
parameter, so the clock driver must determine this information itself.
This patch implements a new Tegra-specific clock API,
clock_get_periph_parent(), for this purpose.
The new API is implemented in the core Tegra clock code rather than SoC-
specific clock code. The implementation uses various SoC-/clock-specific
data. That data is only available in SoC-specific clock code.
Consequently, two new internal APIs are added that enable the core clock
code to retrieve this information from the SoC-specific clock code. Due to
the structure of the Tegra clock code, this leads to some unfortunate code
duplication. However, this situation predates this patch.
Ideally, future work will de-duplicate the Tegra clock code, and migrate
it into drivers/clk/tegra. However, such refactoring is kept separate from
this series.
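A rough sketch of the shape this could take; only clock_get_periph_parent() comes from this patch, and the two internal helper names below are invented for illustration:

  /* Minimal stand-ins for the real Tegra clock/peripheral ID enums. */
  enum clock_id  { CLOCK_ID_OSC, CLOCK_ID_PLLP, CLOCK_ID_PLLM, CLOCK_ID_CLK_M };
  enum periph_id { PERIPH_ID_UART1, PERIPH_ID_SDMMC4 };

  /* SoC-specific clock code: report the peripheral's current source-mux
   * value, and map that mux value to a core clock ID. */
  int periph_get_source_mux(enum periph_id periph);
  enum clock_id periph_source_to_clock_id(enum periph_id periph, int mux);

  /* Core clock code: resolve a peripheral's current parent clock. */
  enum clock_id clock_get_periph_parent(enum periph_id periph)
  {
          int mux = periph_get_source_mux(periph);

          return periph_source_to_clock_id(periph, mux);
  }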
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Currently, Tegra peripheral drivers control two aspects of their HW module
clock(s):
1) The clock enable/rate for the peripheral clock itself.
2) The system-level clock tree setup, i.e. the clock parent.
Aspect 1 is reasonable, but aspect 2 is a system-level decision, not
something that an individual peripheral driver should in general know
about or influence. Such system-level knowledge ties drivers to a
specific SoC implementation, even when they use generic APIs for clock
manipulation, since they must have SoC-specific knowledge such as parent
clock IDs. Limited exceptions exist, such as where peripheral HW is
expected to switch dynamically between clock sources at run-time, for
example CPU clock scaling or display clock conflict management in a
multi-head scenario.
This patch enhances the Tegra core code to perform system-level clock
tree setup, in a similar fashion to the Linux kernel Tegra clock driver.
This will allow future patches to simplify peripheral drivers by removing
the clock parent setup logic.
This change is required prior to converting peripheral drivers to use the
standard clock APIs, since:
1) The clock uclass doesn't currently support a set_parent() operation.
Adding one is possible, but not necessary at the moment.
2) The clock APIs retrieve all clock IDs from the device tree, and the DT
bindings for almost all peripherals only include information about the
relevant peripheral clocks, not any potential parent clocks.
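One way the system-level setup could be structured is a table walked once at boot; the table contents and the mux-setting helper below are illustrative assumptions:

  #include <stddef.h>

  #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

  /* Stand-ins for the real Tegra ID enums and mux-programming helper. */
  enum clock_id  { CLOCK_ID_OSC, CLOCK_ID_PLLP, CLOCK_ID_PLLM };
  enum periph_id { PERIPH_ID_UART1, PERIPH_ID_SDMMC4 };
  void clock_set_periph_source(enum periph_id periph, enum clock_id parent);

  /* One row per peripheral whose parent the core code programs at boot. */
  static const struct {
          enum periph_id periph;
          enum clock_id parent;
  } periph_parents[] = {
          { PERIPH_ID_UART1,  CLOCK_ID_PLLP },
          { PERIPH_ID_SDMMC4, CLOCK_ID_PLLP },
  };

  void tegra_setup_clock_tree(void)
  {
          size_t i;

          /* Drivers then no longer need to know parent clock IDs. */
          for (i = 0; i < ARRAY_SIZE(periph_parents); i++)
                  clock_set_periph_source(periph_parents[i].periph,
                                          periph_parents[i].parent);
  }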
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Fix a number of typos, including:
* "compatble" -> "compatible"
* "eanbeld" -> "enabled"
* "envrionment" -> "environment"
* "FTD" -> "FDT" (for "flattened device tree")
* "ommitted" -> "omitted"
* "overriden" -> "overridden"
* "partiton" -> "partition"
* "propogate" -> "propagate"
* "resourse" -> "resource"
* "rest in piece" -> "rest in peace"
* "suport" -> "support"
* "varible" -> "variable"
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
|
|
On currently supported SoCs, clk_m always runs at the same frequency as
the oscillator input. However newer SoC generations such as Tegra210 no
longer have that restriction. Prepare for that by separating clk_m from
the oscillator clock and allow SoC code to override the clk_m rate.
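A minimal sketch of the kind of hook this enables; the weak-function name and shape are assumptions, not the actual interface:

  #include <stdint.h>

  /* Default: clk_m runs at the oscillator rate (pre-Tegra210 behaviour).
   * SoC code (e.g. Tegra210) can override this with a strong definition
   * that reads the clk_m divider and returns osc_rate / (divider + 1). */
  __attribute__((weak)) uint32_t clk_m_get_rate(uint32_t osc_rate)
  {
          return osc_rate;
  }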
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Added PLL variables (dividers mask/shift, lock enable/detect, etc.)
to a new pllinfo struct for each SoC/PLL: PLLA/C/D/E/M/P/U/X.
Used pllinfo struct in all clock functions, validated on T210.
Should be equivalent to prior code on T124/114/30/20. Thanks
to Marcel Ziswiler for corrections to the T20/T30 values.
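Roughly, each per-SoC PLL table entry carries the register-layout parameters; the field names below are paraphrased from the description rather than copied from the source:

  #include <stdint.h>

  /* One entry per PLL (A/C/D/E/M/P/U/X) per SoC generation. */
  struct pllinfo {
          uint32_t m_shift, m_mask;     /* input divider (M) field      */
          uint32_t n_shift, n_mask;     /* feedback divider (N) field   */
          uint32_t p_shift, p_mask;     /* post divider (P) field       */
          uint32_t lock_ena;            /* lock-enable bit position     */
          uint32_t lock_det;            /* lock-detect status bit       */
          uint32_t kcp_shift, kcp_mask; /* charge-pump gain, if present */
  };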
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Added 38.4MHz/48MHz entries to pll_x_table for the CPU PLL. The resulting
frequency needs to be measured - it should be close to 700MHz (1.4G/2).
Note that some freqs aren't in the PLLU table in T210 TRM
(13, 26MHz), so I used the 12MHz table entry for them. They
shouldn't be selected since they're not viable T210 OSC freqs.
Since there are now 2 new OSC defines, all tables (pll_x_table,
PLLU) had to increase by two entries, but since 38.4/48MHz are
not viable osc freqs on T20/30/114, etc., the new entries are just set to 0.
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Add a simple function to enable external clocks.
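A sketch of what such a function might look like; the register pointer, bit layout and function name below are placeholders, not the actual implementation:

  #include <stdint.h>

  extern volatile uint32_t *pmc_clk_out_cntrl;  /* clock-output control reg */

  /* Force-enable external clock output clk_out_<clk_id> (1..3). */
  int clock_external_output(int clk_id)
  {
          if (clk_id < 1 || clk_id > 3)
                  return -1;

          *pmc_clk_out_cntrl |= 1u << (2 + (clk_id - 1) * 8);

          return 0;
  }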
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Create a function which sets the source clock for a peripheral, given
the number of mux bits to adjust. This can then be used more generally.
For now, don't export it.
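A sketch of the idea; the helper name and the assumption that the mux field occupies the top bits of the source register are illustrative:

  #include <stdint.h>

  /* Program the source-mux field of a peripheral clock-source register,
   * where 'mux_bits' gives the width of the mux field. */
  static void set_periph_source(volatile uint32_t *clk_src_reg,
                                unsigned int source, unsigned int mux_bits)
  {
          uint32_t mask = ((1u << mux_bits) - 1) << (32 - mux_bits);
          uint32_t val = *clk_src_reg;

          val &= ~mask;
          val |= (uint32_t)source << (32 - mux_bits);
          *clk_src_reg = val;
  }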
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
The get_pll() function can do the wrong thing if passed values that are
out of range. Add checks for this and add a function which can return
a 'simple' PLL. This can be defined by SoCs with their own clocks.
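A sketch of the range check; the enum and table names here are stand-ins for the real ones:

  #include <assert.h>
  #include <stddef.h>

  enum clock_id { CLOCK_ID_FIRST, CLOCK_ID_PLLP, CLOCK_ID_PLL_COUNT };
  struct clk_pll;
  extern struct clk_pll *pll_regs[CLOCK_ID_PLL_COUNT];

  /* Return the PLL registers for 'id', or NULL if 'id' is out of range.
   * An SoC with its own extra clocks could provide a similar accessor
   * for its 'simple' PLLs. */
  static struct clk_pll *get_pll(enum clock_id id)
  {
          assert(id >= CLOCK_ID_FIRST && id < CLOCK_ID_PLL_COUNT);
          if (id < CLOCK_ID_FIRST || id >= CLOCK_ID_PLL_COUNT)
                  return NULL;

          return pll_regs[id];
  }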
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Based on the Tegra TRM, the system clock (which is the AVP clock) can
run at up to 275MHz. On power-on, the default system clock source is set
to PLLP_OUT0. In function clock_early_init(), PLLP_OUT0 will be set to
408MHz, which is beyond the system clock's upper limit.
The fix is to set the system clock to CLK_M before initializing PLLP,
and then switch back to PLLP_OUT4, which has an appropriate divider
configured, after PLLP has been configured.
Implement this logic in a new function, tegra30_set_up_pllp(),
which sets up PLLP and all PLLP_OUT* dividers and handles the AVP
clock switching. Remove the duplicate PLLP setup from pllx_set_rate()
and adjust_pllp_out_freqs().
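The ordering, sketched with placeholder helper names:

  /* Placeholder names for the AVP clock-source values and helpers. */
  enum { SCLK_SOURCE_CLKM, SCLK_SOURCE_PLLP_OUT4 };
  void set_avp_clock_source(int source);
  void init_pllp_408mhz(void);
  void set_pllp_out_dividers(void);

  void tegra30_set_up_pllp(void)
  {
          /* 1. Park the AVP/system clock on CLK_M, which is always safe. */
          set_avp_clock_source(SCLK_SOURCE_CLKM);

          /* 2. Program PLLP to 408MHz and configure the PLLP_OUT*
           *    dividers, including one on PLLP_OUT4 that keeps the AVP
           *    below its 275MHz limit. */
          init_pllp_408mhz();
          set_pllp_out_dividers();

          /* 3. Only now switch the AVP/system clock to PLLP_OUT4. */
          set_avp_clock_source(SCLK_SOURCE_PLLP_OUT4);
  }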
Signed-off-by: Jimmy Zhang <jimmzhang@nvidia.com>
[swarren, significantly refactored the change]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Since all code that sets or interprets MASK_BITS_* now uses the enums
to define/compare the values, there is no need for MASK_BITS_* to have
a specific integer value. In fact, having a specific integer value may
encourage people to hard-code those values, or interpret the values in
incorrect ways.
As such, remove the logic that assigns a specific value to the enum
values in order to make it completely clear that it's just an enum, not
something that directly represents some integer value.
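For illustration, the change amounts to a before/after along these lines (the enumerator list is abridged, and only the old value 4 for MASK_BITS_31_28 is documented above):

  /* Before: explicit values invite hard-coding or misinterpretation. */
  enum {
          MASK_BITS_31_30 = 2,
          MASK_BITS_31_28 = 4,
  };

  /* After: a plain enumeration; the values carry no meaning of their own. */
  enum mask_bits {
          MASK_BITS_31_30,
          MASK_BITS_31_28,
  };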
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
The only place where the MASK_BITS_* values are used is in
adjust_periph_pll(), which interprets the value 4 (old MASK_BITS_29_28,
new MASK_BITS_31_28) as being associated with mask OUT_CLK_SOURCE4_MASK,
i.e. bits 31:28. Rename the MASK_BITS_ macro to reflect how it's actually
implemented.
Note that no Tegra clock register actually uses all of bits 31:28 as
the mux field. Rather, bits 30:28, 29:28, or 28 are used. However, in
those cases, nothing is stored in the bits above the mux field, so it's
safe to pretend that the mux field extends all the way to the end of the
register. As such, the U-Boot clock driver is currently a bit lazy, and
doesn't distinguish between 31:28, 30:28, 29:28 and 28; it just lumps
them all together and pretends they're all 31:28. This patch doesn't
cause this issue; it was pre-existing. Hopefully, future patches will
clean this up.
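Sketched as a lookup (only the MASK_BITS_31_28 / OUT_CLK_SOURCE4_MASK pairing is taken from the text above; the other names and values are placeholders):

  #include <stdint.h>

  enum mask_bits { MASK_BITS_31_30, MASK_BITS_31_29, MASK_BITS_31_28 };

  #define OUT_CLK_SOURCE2_MASK (3u << 30)    /* placeholder: bits 31:30 */
  #define OUT_CLK_SOURCE3_MASK (7u << 29)    /* placeholder: bits 31:29 */
  #define OUT_CLK_SOURCE4_MASK (0xfu << 28)  /* bits 31:28 */

  static uint32_t mux_field_mask(enum mask_bits bits)
  {
          switch (bits) {
          case MASK_BITS_31_30:
                  return OUT_CLK_SOURCE2_MASK;
          case MASK_BITS_31_29:
                  return OUT_CLK_SOURCE3_MASK;
          case MASK_BITS_31_28:
          default:
                  return OUT_CLK_SOURCE4_MASK;
          }
  }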
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
The enum used to define the set of register bits used to represent a
clock's input mux, MUX_BITS_*, is defined separately for each SoC at
present. Move this definition to a common location to ease fixing up
some issues with the definition, and the code that uses it.
Signed-off-by: Tom Warren <twarren@nvidia.com>
[swarren, extracted from a larger patch by Tom]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
The CPU complex reset masks do not match the datasheet for the
CLK_RST_CONTROLLER_RST_CPU_CMPLX_SET/CLR_0 registers. For both T20
and T30 the register consists of groups of 4 bits, with one bit for
each CPU core. On T20 the 2 high bits of each group are always stubbed
out, as there are only 2 cores.
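As a sketch of the layout being described (the bit positions of the groups are illustrative):

  /* RST_CPU_CMPLX_SET/CLR: groups of 4 bits, one bit per CPU core. */
  #define CPU_CMPLX_CPURESET(cpu)  (1u << ((cpu) + 0))   /* bits  3:0  */
  #define CPU_CMPLX_DERESET(cpu)   (1u << ((cpu) + 4))   /* bits  7:4  */
  #define CPU_CMPLX_DBGRESET(cpu)  (1u << ((cpu) + 8))   /* bits 11:8  */

  /* On T20 only cores 0 and 1 exist, so the top two bits of each group
   * are stubbed out; on T30 all four bits of each group are used. */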
Signed-off-by: Alban Bedel <alban.bedel@avionic-design.de>
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Signed-off-by: Wolfgang Denk <wd@denx.de>
[trini: Fixup common/cmd_io.c]
Signed-off-by: Tom Rini <trini@ti.com>
|
|
T114 needs the SYSCTR0 counter initialized so the TSC can be
read by the kernel. Do it in the bootloader since it's a write-once
deal (secure/non-secure mode dependent).
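Roughly, the initialization looks like this (the register names, offsets and base address below are assumptions, not taken from the TRM):

  #include <stdint.h>

  #define SYSCTR0_CNTCR    ((volatile uint32_t *)0x700f0000)  /* control   */
  #define SYSCTR0_CNTFID0  ((volatile uint32_t *)0x700f0020)  /* base freq */
  #define CNTCR_EN         (1u << 0)
  #define CNTCR_HDBG       (1u << 1)

  void arch_timer_init(uint32_t osc_freq_hz)
  {
          /* Tell the counter its input frequency, then enable it.  This
           * is effectively write-once, so it is done in the boot loader. */
          *SYSCTR0_CNTFID0 = osc_freq_hz;
          *SYSCTR0_CNTCR = CNTCR_EN | CNTCR_HDBG;
  }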
Signed-off-by: Tom Warren <twarren@nvidia.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
|
|
This 'commonizes' much of the clock/pll code. SoC-dependent code
and tables are left in arch/cpu/tegraXXX-common/clock.c
Some T30 tables needed whitespace fixes due to checkpatch complaints.
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Common Tegra files are in arch-tegra, shared between T20 and T30.
Tegra30-specific headers are in arch-tegra30. Note that some of
these will be filled in as more T30 support is added (drivers,
WB/LP0 support, etc.). A couple of Tegra20 files were also changed
to support common headers in arch-tegra.
Signed-off-by: Tom Warren <twarren@nvidia.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Simon Glass <sjg@chromium.org>
|
|
Common practice on Tegra 2 boards is to use the pllp_out4 FO
to generate the ULPI reference clock. For this to work we have
to override the default hardware-generated output divider.
This patch adds a function providing a clean way to do so.
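A sketch of what such an override helper could do (the PLLP_OUTB address and field layout here are assumptions for illustration):

  #include <stdint.h>

  #define PLLP_OUTB             ((volatile uint32_t *)0x600060a8)
  #define PLLP_OUT4_RATIO_SHIFT 24
  #define PLLP_OUT4_RATIO_MASK  (0xffu << PLLP_OUT4_RATIO_SHIFT)
  #define PLLP_OUT4_OVR_ENABLE  (1u << 18)
  #define PLLP_OUT4_CLK_ENABLE  (1u << 17)

  /* Override the hardware default so PLLP_OUT4 runs at a rate suitable
   * for the ULPI reference clock. */
  void pllp_out4_set_divisor(unsigned int divisor)
  {
          uint32_t val = *PLLP_OUTB;

          val &= ~PLLP_OUT4_RATIO_MASK;
          val |= (uint32_t)divisor << PLLP_OUT4_RATIO_SHIFT;
          val |= PLLP_OUT4_OVR_ENABLE | PLLP_OUT4_CLK_ENABLE;
          *PLLP_OUTB = val;
  }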
Signed-off-by: Lucas Stach <dev@lynxeye.de>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
The move is pretty straightforward. ap20.h and tegra20.h were renamed to ap.h and tegra.h.
Some files remain in arch-tegra20 but 'include' a file in 'arch-tegra' with #defines and structs
that will be common between T20 and T30 HW. HW-specific #defines, etc. stay in the 'arch-tegra20'
'root' file.
All boards build OK with MAKEALL -s tegra20. Checkpatch.pl runs clean. Seaboard works OK.
Signed-off-by: Tom Warren <twarren@nvidia.com>
|